7
Evaluating and Disseminating Intervention Research

Efforts to change health behaviors should be guided by clear criteria of efficacy and effectiveness of the interventions. However, this has proved surprisingly complex and is the source of considerable debate.

The principles of science-based interventions cannot be overemphasized. Medical practices and community-based programs are often based on professional consensus rather than evidence. The efficacy of interventions can only be determined by appropriately designed empirical studies. Randomized clinical trials provide the most convincing evidence, but may not be suitable for examining all of the factors and interactions addressed in this report.

Information about efficacious interventions needs to be disseminated to practitioners. Furthermore, feedback is needed from practitioners to determine the overall effectiveness of interventions in real-life settings. Information from physicians, community leaders, public health officials, and patients is essential for determining the overall effectiveness of interventions.

The preceding chapters review contemporary research on health and behavior from the broad perspectives of the biological, behavioral, and social sciences. A recurrent theme is that continued multidisciplinary and interdisciplinary efforts are needed. Enough research evidence has accumulated to warrant wider application of this information. To extend its use, however, existing knowledge must be evaluated and disseminated.
This chapter addresses the complex relationship between research and application. The challenge of bridging research and practice is discussed with respect to clinical interventions, communities, public agencies, systems of health care delivery, and patients.

During the early 1980s, the National Heart, Lung, and Blood Institute (NHLBI) and the National Cancer Institute (NCI) suggested a sequence of research phases for the development of programs that were effective in modifying behavior (Greenwald, 1984; Greenwald and Cullen, 1984; NHLBI, 1983): hypothesis generation (phase I), intervention methods development (phase II), controlled intervention trials (phase III), studies in defined populations (phase IV), and demonstration research (phase V). Those phases reflect the importance of methods development in providing a basis for large-scale trials and the need for studies of the dissemination and diffusion process as a means of identifying effective application strategies. A range of research and evaluation methods is required to address diverse needs for scientific rigor, appropriateness and benefit to the communities involved, relevance to research questions, and flexibility in cost and setting. Inclusion of the full range of phases, from hypothesis generation to demonstration research, should facilitate development of a more balanced perspective on the value of behavioral and psychosocial interventions.

EVALUATING INTERVENTIONS

Assessing Outcomes

Choice of Outcome Measures

The goals of health care are to increase life expectancy and improve health-related quality of life. Major clinical trials in medicine have evolved toward the documentation of those outcomes. As more trials documented effects on total mortality, some surprising results emerged. For example, studies commonly report that, compared with placebo, lipid-lowering agents reduce total cholesterol and low-density lipoprotein cholesterol, and might increase high-density lipoprotein cholesterol, thereby reducing the risk of death from coronary heart disease (Frick et al., 1987; Lipid Research Clinics Program, 1984). Those trials, however, usually were not associated with reductions in death from all causes (Golomb, 1998; Muldoon et al., 1990).

Similarly, He et al. (1999) found that dietary sodium intake in overweight people was not related to the incidence of coronary heart disease but was associated with mortality from coronary heart disease.

Another example can be found in the treatment of cardiac arrhythmia. Among adults who have previously suffered a myocardial infarction, symptomatic cardiac arrhythmia is a risk factor for sudden death (Bigger, 1984). However, a randomized drug trial in 1,455 post-infarction patients demonstrated that those randomly assigned to take an anti-arrhythmia drug showed reduced arrhythmia but were significantly more likely to die from arrhythmia, and from all causes, than those assigned to take a placebo. If investigators had measured only heart rhythm changes, they would have concluded that the drug was beneficial. Only when primary health outcomes were considered was it established that the drug was dangerous (Cardiac Arrhythmia Suppression Trial [CAST] Investigators, 1989).

Many behavioral intervention trials document the capacity of interventions to modify risk factors (NHLBI, 1998), but relatively few Level I studies have measured life expectancy and quality of life as outcomes. As the examples above show, assessing risk factors alone may not be adequate. The ramifications of interventions are not always apparent until they are fully evaluated, and it is possible that a recommended behavioral change could increase mortality through unforeseen consequences; a recommendation of increased exercise, for example, might heighten the incidence of roadside auto fatalities. Although risk factor modification is expected to improve outcomes, assessment of increased longevity is essential. Measurement of mortality as an endpoint does, however, necessitate long-duration trials that can incur greater costs.

Outcome Measurement

One approach to representing outcomes comprehensively is the quality-adjusted life year (QALY), a measure of life expectancy (Gold et al., 1996; Kaplan and Anderson, 1996) that integrates mortality and morbidity in terms of equivalents of well-years of life. If a woman expected to live to age 75 dies of lung cancer at 50, the disease caused 25 lost life-years. If 100 women with life expectancies of 75 die at age 50, 2,500 (100 × 25) life-years would be lost. But death is not the only outcome of concern. Many adults suffer from diseases that leave them more or less disabled for long periods; although still alive, their quality of life is diminished. QALYs account for the quality-of-life consequences of illness. For example, a disease that reduces quality of life by one-half reduces QALYs by 0.5 during each year the patient suffers; if the disease affects 2 people, it reduces QALYs by 1 (2 × 0.5) each year. A pharmaceutical treatment that improves quality of life by 0.2 for 5 people yields the equivalent of 1 QALY if the benefit is maintained over a 1-year period. The basic assumption is that 2 years scored as 0.5 each add up to the equivalent of 1 year of complete wellness; similarly, 4 years scored as 0.25 each are equivalent to 1 year of complete wellness. A treatment that boosts a patient's health from 0.50 to 0.75 on a scale ranging from 0.0 (death) to 1.0 (the highest level of wellness) adds the equivalent of 0.25 QALY; applied to 4 patients for 1 year, its effect would be equivalent to 1 year of complete wellness. This approach has the advantage of expressing the benefits and side effects of treatment programs in a common unit. Although QALYs typically are used to assess effects on patients, they also can capture effects on others, including caregivers who are placed at risk because their experience is stressful. Most important, QALYs are required for many methods of cost-effectiveness analysis.
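
The QALY arithmetic above reduces to multiplying a change in the utility weight by its duration and by the number of people affected. The short script below is a minimal sketch of that bookkeeping; the function name and the example figures are illustrative rather than taken from the report.

```python
# Minimal sketch of the QALY arithmetic described above.
# Assumes the simple model in the text: QALYs = utility weight x years,
# summed over the people affected. Names and numbers are illustrative.

def qalys_gained(utility_gain: float, years: float, n_people: int = 1) -> float:
    """Equivalent well-years gained when each person's utility rises by
    `utility_gain` (on a 0.0 = death to 1.0 = full health scale) for `years`."""
    return utility_gain * years * n_people

# 100 women who would have lived to 75 but die at 50: 25 life-years lost each.
life_years_lost = qalys_gained(utility_gain=1.0, years=25, n_people=100)        # 2500.0

# A treatment that improves utility by 0.2 for 5 people over 1 year.
drug_benefit = qalys_gained(utility_gain=0.2, years=1, n_people=5)              # 1.0

# Raising a patient from 0.50 to 0.75 for 1 year, applied to 4 patients.
program_benefit = qalys_gained(utility_gain=0.75 - 0.50, years=1, n_people=4)   # 1.0

print(life_years_lost, drug_benefit, program_benefit)
```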

The most controversial aspect of the methodology is the procedure for assigning values along the scale. Three methods are commonly used: the standard reference gamble, the time-tradeoff, and rating scales. Economists and psychologists differ in their preferred approach to preference assessment. Economists typically prefer the standard gamble because it is consistent with the axioms of choice outlined in decision theory (Torrance, 1976). Economists also accept the time-tradeoff because it represents choice, even though it is not exactly consistent with the axioms derived from theory (Bennett and Torrance, 1996). However, evidence from experimental studies calls into question many of the assumptions that underlie economic models of choice; in particular, human evaluators do poorly at integrating complex probability information when making decisions involving risk (Tversky and Fox, 1995). Economic models often assume that choice is rational, but psychological experiments suggest that methods commonly used for choice studies do not represent the true underlying preference continuum (Zhu and Anderson, 1991), and some evidence supports the use of simple rating scales (Anderson and Zalinski, 1990). Recently, economists have attempted to integrate findings from cognitive science, while psychologists have begun investigations of choice and decision-making (Tversky and Shafir, 1992). A significant body of studies demonstrates that different methods for estimating preferences produce different values (Lenert and Kaplan, 2000). This happens because the methods ask different questions. More research is needed to clarify the best method for valuing health states.
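
As an illustration of how one of these elicitation methods yields a weight, the sketch below converts a time-tradeoff response into a utility value using the standard ratio of the shorter time in full health to the longer time in the health state. The formula is general background rather than a procedure specified in the report, and the example numbers are invented.

```python
# Minimal sketch of scoring a time-tradeoff (TTO) response.
# A respondent judges x years in full health equivalent to t years in the
# health state being valued; the implied utility weight is x / t.
# Example numbers are hypothetical.

def time_tradeoff_utility(years_full_health: float, years_in_state: float) -> float:
    if not 0 < years_full_health <= years_in_state:
        raise ValueError("Expect 0 < x <= t for a state valued at or below full health.")
    return years_full_health / years_in_state

# Indifferent between 6 years in full health and 10 years with the condition:
print(time_tradeoff_utility(6, 10))  # 0.6
```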

The weighting used for quality adjustment comes from surveys of patient or population groups, an aspect of the method that has generated considerable discussion among methodologists and ethicists (Kaplan, 1994). Preference weights are typically obtained by asking patients, or people randomly selected from a community, to rate cases that describe people in various states of wellness; the cases usually describe level of functioning and symptoms. Although some studies show small but significant differences in preference ratings between demographic groups (Kaplan, 1998), most studies have shown a high degree of similarity in preferences (see Kaplan, 1994, for review). A panel convened by the U.S. Department of Health and Human Services reviewed methodologic issues relevant to cost-utility analysis (the formal name for this approach) in health care. The panel concluded that population averages, rather than patient group preference weights, are more appropriate for policy analysis (Gold et al., 1996).

Several authors have argued that resource allocation on the basis of QALYs is unethical (see La Puma and Lawlor, 1990). Those who reject the use of QALYs suggest that they cannot be measured; however, the reliability and validity of quality-of-life measures are well documented (Spilker, 1996). Another ethical challenge to QALYs is that they force health care providers to make decisions based on cost-effectiveness rather than on the health of the individual patient. A further common criticism is that QALYs discriminate against the elderly and the disabled: because older people and those with disabilities have lower QALYs, it is assumed that fewer services will be provided to them. However, QALYs consider the increment in benefit, not the starting point. Programs that prevent the decline of health status, or that prevent deterioration in functioning among the disabled, do perform well in QALY outcome analysis. It is likely that QALYs will not reveal benefits for heroic care at the very end of life; however, most people prefer not to undergo treatment that is unlikely to increase life expectancy or improve quality of life (Schneiderman et al., 1992). Ethical issues relevant to the use of cost-effectiveness analysis are considered in detail in the report of the Panel on Cost-Effectiveness in Health and Medicine (Gold et al., 1996).
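
Because cost-utility analysis expresses a program's value as the cost of each additional QALY it produces, the comparison reduces to a simple ratio. The snippet below is a minimal, hypothetical sketch of that incremental calculation; the dollar and QALY figures are invented and are not drawn from the report.

```python
# Minimal sketch of a cost-utility comparison: the incremental
# cost-effectiveness ratio = (cost_new - cost_old) / (qalys_new - qalys_old).
# All figures below are hypothetical.

def cost_per_qaly(cost_new: float, cost_old: float,
                  qalys_new: float, qalys_old: float) -> float:
    delta_qalys = qalys_new - qalys_old
    if delta_qalys == 0:
        raise ValueError("No QALY difference; the ratio is undefined.")
    return (cost_new - cost_old) / delta_qalys

# Hypothetical behavioral program: costs $400 more per patient than usual care
# and yields 0.05 additional QALYs per patient.
print(cost_per_qaly(cost_new=1400, cost_old=1000, qalys_new=0.80, qalys_old=0.75))
# -> 8000.0 dollars per QALY gained
```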

Evaluating Clinical Interventions

Behavioral interventions have been used to modify behaviors that put people at risk for disease, to manage disease processes, and to help patients cope with their health conditions. Behavioral and psychosocial interventions take many forms. Some provide knowledge or persuasive information; others involve individual, family, group, or community programs to change or support changes in health behaviors (such as tobacco use, physical activity, or diet); still others involve patient or health care provider education to stimulate behavior change or risk avoidance. Behavioral and psychosocial interventions are not without consequence for patients and their families, friends, and acquaintances; interventions cost money, take time, and are not always enjoyable. Justification for interventions requires assurance that the changes advocated are valuable. The kinds of evidence required to evaluate the benefits of interventions are discussed below.

Evidence-Based Medicine

Evidence-based medicine uses the best available scientific evidence to inform decisions about what treatments individual patients should receive (Sackett et al., 1997). Not all studies are equally credible. Last (1995) offered a hierarchy of clinical research evidence, shown in Table 7-1. Level I, the most rigorous, is reserved for randomized clinical trials (RCTs), in which participants are randomly assigned to the experimental condition or to a meaningful comparison condition; this is the most widely accepted standard for evaluating interventions. Such trials involve either "single blinding" (investigators know which participants are assigned to the treatment and control groups but participants do not) or "double blinding" (neither the investigators nor the participants know the group assignments) (Friedman et al., 1985).

TABLE 7-1 Research Evidence Hierarchy

Level   Element
I.      Randomized controlled trial
II.     Controlled trial without randomization
        Cohort or case-control analytic study
        Multiple time series
        Uncontrolled experiment with dramatic results
III.    Case study
        Expert opinion

SOURCE: Last, 1995, by permission of Lancet Ltd. All rights reserved.
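
To make the hierarchy concrete, the fragment below shows one way a review team might encode Table 7-1 as a lookup when tagging studies. The mapping mirrors the table, but the design labels and function are illustrative only, not an instrument used in the report.

```python
# Minimal sketch of grading studies with the hierarchy in Table 7-1 (Last, 1995).
# The design labels are illustrative; a real review would need finer rules.

EVIDENCE_LEVELS = {
    "randomized controlled trial": "I",
    "controlled trial without randomization": "II",
    "cohort or case-control analytic study": "II",
    "multiple time series": "II",
    "uncontrolled experiment with dramatic results": "II",
    "case study": "III",
    "expert opinion": "III",
}

def evidence_level(design: str) -> str:
    return EVIDENCE_LEVELS.get(design.lower(), "unclassified")

print(evidence_level("Randomized controlled trial"))  # I
print(evidence_level("Case study"))                   # III
```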

Double blinding is difficult in behavioral intervention trials, but there are good examples of single-blind experiments. Reviews of the literature often grade studies according to these levels of evidence: Level I evidence is considered more credible than Level II evidence, and Level III evidence is given little weight.

There has been concern about the generalizability of RCTs (Feinstein and Horwitz, 1997; Horwitz, 1987a,b; Horwitz and Daniels, 1996; Horwitz et al., 1996, 1990; Rabeneck et al., 1992), specifically because the recruitment of participants can result in samples that are not representative of the population (Seligman, 1996). There is a trend toward increased heterogeneity of the patient population in RCTs. Even so, RCTs often include stringent criteria for participation that can exclude people on the basis of comorbid conditions or other characteristics that occur frequently in the population. Furthermore, RCTs are often conducted in specialized settings, such as university-based teaching hospitals, that do not draw representative population samples. Trials sometimes exhibit large dropout rates, which further undermine the generalizability of their findings.

Oldenburg and colleagues (1999) reviewed all papers published in 1994 in 12 selected journals on public health, preventive medicine, health behavior, and health promotion and education, grading the studies according to evidence level: 2% were Level I RCTs and 48% were Level II. The authors expressed concern that behavioral research might not be considered credible when evaluated against the systematic experimental trials that are more common in other fields of medicine. Studies with more rigorous experimental designs are also less likely to demonstrate treatment effectiveness (Heaney and Goetzel, 1997; Mosteller and Colditz, 1996). Although there have been relatively few behavioral intervention trials, those that have been published support the efficacy of behavioral interventions in a variety of circumstances, including smoking, chronic pain, cancer care, and bulimia nervosa (Compas et al., 1998).

Efficacy and Effectiveness

Efficacy is the capacity of an intervention to work under controlled conditions. Randomized clinical trials are essential in establishing the effects of a clinical intervention (Chambless and Hollon, 1998) and in determining that an intervention can work.

However, demonstration of efficacy in an RCT does not guarantee that the treatment will be effective in actual practice settings. For example, some reviews suggest that behavioral interventions in psychotherapy are generally beneficial (Matt and Navarro, 1997), others suggest that interventions are less effective in clinical settings than in the laboratory (Weisz et al., 1992), and still others find particular interventions equally effective in experimental and clinical settings (Shadish et al., 1997).

The Division of Clinical Psychology of the American Psychological Association recently established criteria for "empirically supported" psychological treatments (Chambless and Hollon, 1998). In an effort to establish a level of excellence in validating the efficacy of psychological interventions, the criteria are relatively stringent. A treatment is considered empirically supported if it is found to be more effective than either an alternative form of treatment or a credible control condition in at least two RCTs, and the effects must be replicated by at least two independent laboratories or investigative teams to ensure that they are not attributable to special characteristics of a specific investigator or setting. Several health-related behavior change interventions meeting those criteria have been identified, including interventions for management of chronic pain, smoking cessation, adaptation to cancer, and treatment of eating disorders (Compas et al., 1998). An intervention that has failed to meet the criteria still has potential value and might represent important or even landmark progress in the field of health-related behavior change.

As in many fields of health care, there historically has been little effort to set standards for psychological treatments for health-related problems or disease. Recently, however, managed-care and health maintenance organizations have begun to monitor and regulate both the type and the duration of psychological treatments that are reimbursed. A common set of criteria for making coverage decisions has not been articulated, so decisions are made in the absence of appropriate scientific data to support them. It is in the best interest of the public, and of those involved in the development and delivery of health-related behavior change interventions, to establish criteria that are based on the best available scientific evidence. Criteria for empirically supported treatments are an important part of that effort.
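
The "empirically supported" standard described above is essentially a decision rule over the published trials for a treatment. The sketch below expresses that rule in code under the stated assumptions (at least two supportive RCTs from at least two independent teams); the record fields and example data are hypothetical.

```python
# Rough sketch of the "empirically supported treatment" rule described above:
# superiority shown in at least two RCTs, replicated by at least two
# independent investigative teams. Field names are hypothetical.

from typing import Dict, List

def empirically_supported(trials: List[Dict]) -> bool:
    supportive = [t for t in trials
                  if t["design"] == "RCT" and t["beat_control_or_alternative"]]
    independent_teams = {t["team"] for t in supportive}
    return len(supportive) >= 2 and len(independent_teams) >= 2

trials = [
    {"design": "RCT", "beat_control_or_alternative": True,  "team": "Lab A"},
    {"design": "RCT", "beat_control_or_alternative": True,  "team": "Lab B"},
    {"design": "RCT", "beat_control_or_alternative": False, "team": "Lab C"},
]
print(empirically_supported(trials))  # True
```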

Evaluating Community-Level Interventions

Evaluating the effectiveness of interventions in communities requires different methods. Developing and testing interventions that take a more comprehensive, ecologic approach, and that are effective in reducing risk-related behaviors and influencing the social factors associated with health status, require many levels and types of research (Flay, 1986; Green et al., 1995; Greenwald and Cullen, 1984). Questions have been raised about the appropriateness of RCTs for addressing research questions when the unit of analysis is larger than the individual, such as a group, organization, or community (McKinlay, 1993; Susser, 1995). Although this discussion uses the community as the unit of analysis, similar principles apply to interventions aimed at groups, families, or organizations.

Review criteria for community interventions have been suggested by Hancock and colleagues (Hancock et al., 1997). Their criteria for rigorous scientific evaluation of community intervention trials cover four domains: (1) design, including the randomization of communities to condition and the use of sampling methods that ensure representativeness of the entire population; (2) measures, including the use of outcome measures with demonstrated validity and reliability and of process measures that describe the extent to which the intervention was delivered to the target audience; (3) analysis, including consideration of both individual variation within each community and community-level variation within each treatment condition; and (4) specification of the intervention in enough detail to allow replication.

Randomization of communities to conditions raises challenges for intervention research in terms of expense and statistical power (Koepsell et al., 1995; Murray, 1995). The restricted hypotheses that RCTs test cannot adequately capture the complexities and multiple causes of human behavior and health status embedded within communities (Israel et al., 1995; Klitzner, 1993; McKinlay, 1993; Susser, 1995), and a randomized controlled trial might actually alter the interaction between an intervention and a community, attenuating the effectiveness of the intervention (Fisher, 1995; McKinlay, 1993). At the level of community interventions, experimental control might not be possible, especially when change is unplanned. That is, given the different sociopolitical structures, cultures, and histories of communities, and the numerous factors that are beyond a researcher's ability to control, it might be impossible to identify and maintain a commensurate comparison community (Green et al., 1996; Hollister and Hill, 1995; Israel et al., 1995; Klitzner, 1993; Mittelmark et al., 1993; Susser, 1995).
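
One concrete reason that randomizing communities strains statistical power, as noted above, is that people within a community resemble one another, so each additional participant contributes less independent information. The sketch below applies the standard design-effect formula for cluster-randomized designs; the formula is general background rather than material from the report, and the intraclass correlation and cluster size are hypothetical.

```python
# Sketch of why community (cluster) randomization strains statistical power.
# Standard design effect for equal cluster sizes: DEFF = 1 + (m - 1) * ICC,
# where m is the number of people per community and ICC is the intraclass
# correlation. Values below are hypothetical.

def design_effect(cluster_size: int, icc: float) -> float:
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_total: int, cluster_size: int, icc: float) -> float:
    """Number of independent observations the clustered sample is 'worth'."""
    return n_total / design_effect(cluster_size, icc)

# 10 communities of 500 people each, with a modest ICC of 0.02:
n = 10 * 500
print(design_effect(500, 0.02))             # 10.98
print(effective_sample_size(n, 500, 0.02))  # ~455, far fewer than the 5000 enrolled
```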

Using a control community does not completely solve the problem of comparison, however, because one "cannot assume that a control community will remain static or free of influence by national campaigns or events occurring in the experimental communities" (Green et al., 1996, p. 274).

Clear specification of the conceptual model guiding a community intervention is needed to clarify how the intervention is expected to work (Koepsell, 1998; Koepsell et al., 1992). This is the contribution of the Theory of Change model for communities described in Chapter 6. A theoretical framework is necessary to specify mediating mechanisms and modifying conditions. Mediating mechanisms are pathways, such as social support, through which the intervention induces the outcomes; modifying conditions, such as social class, are not affected by the intervention but can influence outcomes independently. Such an approach offers numerous advantages, including the ability to identify pertinent variables and how, when, and in whom they should be measured; the ability to evaluate and control for sources of extraneous variance; and the ability to develop a cumulative knowledge base about how and when programs work (Bickman, 1987; Donaldson et al., 1994; Lipsey, 1993; Lipsey and Polard, 1989). When an intervention is unsuccessful at stimulating change, data on mediating mechanisms allow investigators to determine whether the failure is due to the inability of the program to activate the causal processes that the theory predicts or to an invalid program theory (Donaldson et al., 1994).
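
When a program theory names a mediating mechanism, the intervention effect can be decomposed into the portion routed through that mediator and the remaining direct effect. The snippet below is a bare-bones illustration of that decomposition with ordinary least squares on simulated data; it is not a method prescribed by the report, the variable names are invented, and a real community evaluation would need models that account for clustering and a defensible causal specification.

```python
# Bare-bones illustration of probing a mediating mechanism: the effect of an
# intervention (T) on an outcome (Y) routed through a mediator (M), such as
# social support. Simulated data; indirect effect estimated as a * b.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
T = rng.integers(0, 2, n)                    # 0 = control, 1 = intervention
M = 0.5 * T + rng.normal(0, 1, n)            # mediator responds to T (path a)
Y = 0.4 * M + 0.1 * T + rng.normal(0, 1, n)  # outcome responds to M and T

def ols(y, predictors):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(M, [T])[1]          # effect of T on M
b = ols(Y, [T, M])[2]       # effect of M on Y, holding T fixed
direct = ols(Y, [T, M])[1]  # remaining direct effect of T

print(f"indirect (a*b) ~ {a * b:.2f}, direct ~ {direct:.2f}")  # roughly 0.20 and 0.10
```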

Small-scale, targeted studies sometimes provide a basis for refining large-scale intervention designs and enhance understanding of methods for influencing group behavior and social change (Fisher, 1995; Susser, 1995; Winkleby, 1994). For example, more in-depth, comparative, multiple-case-study evaluations are needed to explain and identify lessons learned regarding the context, process, impacts, and outcomes of community-based participatory research (Israel et al., 1998).

Community-Based Participatory Research and Evaluation

As reviewed in Chapter 4, broad social and societal influences have an impact on health. This concept points to the importance of an approach that recognizes individuals as embedded within social, political, and economic systems that shape their behaviors and constrain their access to the resources necessary to maintain their health (Brown, 1991; Gottlieb and McLeroy, 1994; Krieger, 1994; Krieger et al., 1993; Lalonde, 1974; Lantz et al., 1998; McKinlay, 1993; Sorensen et al., 1998a,b; Stokols, 1992, 1996; Susser and Susser, 1996a,b; Williams and Collins, 1995; World Health Organization [WHO], 1986). It also points to the importance of expanding the evaluation of interventions to incorporate such factors (Fisher, 1995; Green et al., 1995; Hatch et al., 1993; Israel et al., 1995; James, 1993; Pearce, 1996; Sorensen et al., 1998a,b; Steckler et al., 1992; Susser, 1995). This is exemplified by community-based participatory programs, which are collaborative efforts among community members, organization representatives, a wide range of researchers and program evaluators, and others (Israel et al., 1998). The partners contribute "unique strengths and shared responsibilities" (Green et al., 1995, p. 12) to enhance understanding of a given phenomenon, and they integrate the knowledge gained from interventions to improve the health and well-being of community members (Dressler, 1993; Eng and Blanchard, 1990–1; Hatch et al., 1993; Israel et al., 1998; Schulz et al., 1998a). This approach provides "the opportunity…for communities and science to work in tandem to ensure a more balanced set of political, social, economic, and cultural priorities, which satisfy the demands of both scientific research and communities at higher risk" (Hatch et al., 1993, p. 31). The advantages and rationale of community-based participatory research are summarized in Table 7-2 (Israel et al., 1998). The term "community-based participatory research" is used here to clearly differentiate it from "community-based research," which often refers to research that is merely located in a community but in which community members are not actively involved.

Table 7-3 presents a set of principles, or characteristics, that capture the important components of community-based participatory research and evaluation (Israel et al., 1998). Each principle constitutes a continuum and represents a goal, for example, equitable participation and shared control over all phases of the research process (Cornwall, 1996; Dockery, 1996; Green et al., 1995). Although the principles are presented here as distinct items, community-based participatory research integrates them.

There are four major foci of evaluation with implications for research design: context, process, impact, and outcome (Israel, 1994; Israel et al., 1995; Simons-Morton et al., 1995). A comprehensive community-based participatory evaluation would include all four types, but it is often financially practical to pursue only one or two. Evaluation design is extensively reviewed in the literature (Campbell and Stanley, 1963; Cook and

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Kass, D. and Freudenberg, N. (1997). Coalition building to prevent childhood lead poisoning: A case study from New York City. In M.Minkler (Ed.), Community Organizing and Community Building for Health (pp. 278–288). New Brunswick, NJ: Rutgers University Press. Kegler, M.C., Steckler, A., Malek, S.H., and McLeroy, K. (1998a). A multiple case study of implementation in 10 local Project ASSIST coalitions in North Carolina. Health Education Research, 13, 225–238. Kegler, M.C., Steckler, A., McLeroy, K., and Malek, S.H. (1998b). Factors that contribute to effective community health promotion coalitions: A study of 10 Project ASSIST coalitions in North Carolina. American Stop Smoking Intervention Study for Cancer Prevention. Health Education and Behavior, 25, 338–353. Klein, D.C. (1968). Community Dynamics and Mental Health. New York: Wiley. Klitzner, M. (1993). A public health/dynamic systems approach to community-wide alcohol and other drug initiatives. In R.C.Davis, A.J.Lurigo, and D.P.Rosenbaum (Eds.) Drugs and the Community (pp. 201–224). Springfield, IL: Charles C.Thomas. Koepsell, T.D. (1998). Epidemiologic issues in the design of community intervention trials. In R.Brownson, and D.Petitti (Eds.) Applied Epidemiology: Theory To Practice (pp. 177–212). New York: Oxford University Press. Koepsell, T.D., Diehr, P.H., Cheadle, A., and Kristal, A. (1995). Invited commentary: Symposium on community intervention trials. American Journal of Epidemiology, 142, 594–599. Koepsell, T.D., Wagner, E.H., Cheadle, A.C., Patrick, D.L., Martin, D.C., Diehr, P.H., Perrin, E.B., Kristal, A.R., Allan-Andrilla, C.H., and Dey, L.J. (1992). Selected methodological issues in evaluating community-based health promotion and disease prevention programs. Annual Review of Public Health, 13, 31–57. Kong, A., Barnett, G.O., Mosteller, F., and Youtz, C. (1986). How medical professionals evaluate expressions of probability. New England Journal of Medicine, 315, 740–744. Kraus, J.F. (1985). Effectiveness of measures to prevent unintentional deaths of infants and children from suffocation and strangulation. Public Health Report, 100, 231–240. Kraus, J.F., Peek, C., McArthur, D.L., and Williams, A. (1994). The effect of the 1992 California motorcycle helmet use law on motorcycle crash fatalities and injuries. Journal of the American Medical Association, 272, 1506–1511. Krieger, N. (1994). Epidemiology and the web of causation: Has anyone seen the spider? Social Science and Medicine, 39, 887–903. Krieger, N., Rowley, D.L, Herman, A.A., Avery, B., and Phillips, M.T. (1993). Racism, sexism and social class: Implications for studies of health, disease and well-being. American Journal of Preventive Medicine, 9, 82–122. La Puma, J. and Lawlor, E.F. (1990). Quality-adjusted life-years. Ethical implications for physicians and policymakers. Journal of the American Medical Association 263, 2917– 2921. Labonte, R. (1994). Health promotion and empowerment: reflections on professional practice . Health Education Quarterly, 21, 253–268. Lalonde, M. (1974). A new perspective on the health of Canadians. Ottawa, ON: Ministry of Supply and Services. Lando, H.A., Pechacek, T.F., Pirie, P.L., Murray, D.M., Mittelmark, M.B., Lichtenstein, E., Nothwehyr, F., and Gray, C. (1995). Changes in adult cigarette smoking in the Minnesota Heart Health Program. American Journal of Public Health, 85, 201–208.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Lantz, P.M., House, J.S., Lepkowski, J.M., Williams, D.R., Mero, R.P., and Chen, J. (1998). Socioeconomic factors, health behaviors, and mortality. Journal of the American Medical Association, 279, 1703–1708. Last, J. (1995). Redefining the unacceptable. Lancet, 346, 1642–1643. Lather, P. (1986). Research as praxis. Harvard Educational Review, 56, 259–277. Lenert, L., and Kaplan, R.M. (2000). Validity and interpretation of preference-based measures of health-related quality of life. Medical Care, 38, 138–150. Leventhal, H. and Cameron, L. (1987). Behavioral theories and the problem of compliance. Patient Education and Counseling, 10, 117–138. Levine, D.M, Becker, D.M, Bone, L.R, Stillman, F.A, Tuggle II, M.B., Prentice, M., Carter, J., and Filippeli, J. (1992). A partnership with minority populations: A community model of effectiveness research. Ethnicity and Disease, 2, 296–305. Lewin, K. (1951) Field Theory in Social Science. New York: Harper. Lewis, C.E. (1988). Disease prevention and health promotion practices of primary care physicians in the United States. American Journal of Preventive Medicine, 4, 9–16. Liao, L., Jollis, J.G., DeLong, E.R., Peterson, E.D., Morris, K.G., and Mark, D.B. (1996). Impact of an interactive video on decision making of patients with ischemic heart disease. Journal of General Internal Medicine, 11, 373–376. Lichter, A.S., Lippman, M.E., Danforth, D.N., Jr., d’Angelo, T., Steinberg, S.M., deMoss, E., MacDonald, H.D., Reichert, C.M., Merino, M., Swain, S.M., et al. (1992). Mastectomy versus breast-conserving therapy in the treatment of stage I and II carcinoma of the breast: A randomized trial at the National Cancer Institute. Journal of Clinical Oncokgy, 10, 976–983. Lillie-Blanton, M. and Hoffman, S.C. (1995). Conducting an assessment of health needs and resources in a racial/ethnic minority community. Health Services Research, 30, 225–236. Lincoln, Y.S. and Reason, P. (1996). Editor’s introduction. Qualitative Inquiry, 2, 5–11. Linville, P.W., Fischer, G.W., and Fischhoff, B. (1993). AIDS risk perceptions and decision biases. In J.B.Pryor and G.D.Reeder (Eds.) The Social Psychology of HIV Infection (pp. 5–38). Hillsdale, NJ: Lawrence Erlbaum. Lipid Research Clinics Program. (1984). The Lipid Research Clinics Coronary Primary Prevention Trial results. I. Reduction in incidence of coronary heart disease. Journal of the American Medical Association, 251, 351–364. Lipkus, I.M. and Hollands, J.G. (1999). The visual communication of risk. Journal of National Cancer Institute Monographs, 25, 149–162. Lipsey, M.W. (1993). Theory as method: Small theories of treatments. New Direction in Program Evaluation, 57, 5–38. Lipsey, M.W. and Polard, J.A. (1989). Driving toward theory in program evaluation: More models to choose from. Evaluation and Program Planning, 12, 317–328. Lund, A.K., Williams, A.F., and Womack, K.N. (1991). Motorcycle helmet use in Texas. Public Health Reports, 106, 576–578. Maguire, P. (1987). Doing Participatory Research: A Feminist Approach. School of Education, Amherst, MA: The University of Massachusetts. Maguire, P. (1996). Considering more feminist participatory research: What’s congruency got to do with it? Qualitative Inquiry, 2, 106–118. Marin, G. and Marin, B.V. (1991). Research with Hispanic Populations. Newbury Park, CA: Sage.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Matt, G.E. and Navarro, A.M. (1997). What meta-analyses have and have not taught us about psychotherapy effects: A review and future directions. Clinical Psychology Review, 17, 1–32. Mazur, D.J. and Hickam, D.H. (1997). Patients’ preferences for risk disclosure and role in decision making for invasive medical procedures. Journal of General Internal Medicine, 12, 114–117. McGraw, S.A., Stone, E.J., Osganian, S.K., Elder, J.P., Perry, C.L., Johnson, C.C., Parcel, G.S., Webber, L.S., and Luepker, R.V. (1994). Design of process evaluation within the child and adolescent trial for cardiovascular health (CATCH). Health Education Quarterly, S5–S26. McIntyre, S. and West, P. (1992). What does the phrase “safer sex” mean to you? AIDS, 7, 121–126. McKay, H.G., Feil, E.G., Glasgow, R.E., and Brown, J.E. (1998). Feasibility and use of an internet support service for diabetes self-management. The Diabetes Educator, 24, 174– 179. McKinlay, J.B. (1993). The promotion of health through planned sociopolitical change: challenges for research and policy. Social Science and Medicine, 36, 109–117. McKnight, J.L. (1987). Regenerating community. Social Policy, 17, 54–58. McKnight, J.L. (1994). Politicizing health care. In P.Conrad, and R.Kern (Eds.) The Sociology Of Health And Illness: Critical Perspectives, 4th Edition (pp. 437–441). New York: St. Martin’s. McVea, K., Crabtree, B.F., Medder, J.D., Susman, J.L., Lukas, L., McIlvain, H.E., Davis, C.M., Gilbert, C.S., and Hawver, M. (1996). An ounce of prevention? Evaluation of the ‘Put Prevention into Practice’ program. Journal of Family Practice, 43, 361–369. Merz, J., Fischhoff, B., Mazur, D.J., and Fischbeck, P.S. (1993). Decision-analytic approach to developing standards of disclosure for medical informed consent. Journal of Toxics and Liability, 15, 191–215. Minkler, M. (1989). Health education, health promotion and the open society: An historical perspective. Health Education Quarterly, 16, 17–30. Mittelmark, M.B., Hunt, M.K., Heath, G.W., and Schmid, T.L. (1993). Realistic outcomes: Lessons from community-based research and demonstration programs for the prevention of cardiovascular diseases. Journal of Public Health Policy, 14, 437–462. Monahan, J.L. and Scheirer, M.A. (1988). The role of linking agents in the diffusion of health promotion programs. Health Education Quarterly, 15, 417–434. Morgan, M.G. (1995). Fields from Electric Power [brochure]. Pittsburgh, PA: Department of Engineering and Public Policy, Carnegie Mellon University. Morgan, M.G., Fischhoff, B., Bostrom, A., and Atman, C. (2001). Risk Communication: The Mental Models Approach. New York: Cambridge University Press. Mosteller, F. and Colditz, G.A. (1996). Understanding research synthesis (meta-analysis). Annual Review of Public Health, 17, 1–23. Muldoon, M.F., Manuck, S.B., and Matthews, K.A. (1990). Lowering cholesterol concentrations and mortality: A quantitative review of primary prevention trials. British Medical Journal, 301, 309–314. Murray, D. (1995). Design and analysis of community trials: Lessons from the Minnesota Heart Health Program. American Journal of Epidemilogy, 142, 569–575. Murray, D.M. (1986). Dissemination of community health promotion programs: The Fargo-Moorhead Heart Health Program. Journal of School Health, 56, 375–381.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Myers, A.M, Pfeiffle, P, and Hinsdale, K. (1994). Building a community-based consortium for AIDS patient services. Public Health Reports, 109, 555–562. National Research Council, Committee on Risk Perception and Communication. (1989). Improving Risk Communication. Washington, DC: National Academy Press. NHLBI (National Heart, Lung, and Blood Institute). (1983). Guidelines for Demonstration And Education Research Grants. Washington, DC: National Institutes of Health. NHLBI (National Heart, Lung, and Blood Institute). (1998). Report of the Task Force on Behavioral Research in Cardiovascular, Lung, and Blood Health and Disease. Bethesda, MD: National Institutes of Health. Ni, H., Sacks, J.J., Curtis, L., Cieslak, P.R., and Hedberg, K. 1997. Evaluation of a statewide bicycle helmet law via multiple measures of helmet use. Archives of Pediatric and Adolescent Medicine, 151, 59–65. Nyden, P.W. and Wiewel, W. (1992). Collaborative research: harnessing the tensions between researcher and practitioner. American Sociologist, 24, 43–55. O’Connor, P.J., Solberg, L.I., and Baird, M. (1998). The future of primary care. The enhanced primary care model. Journal of Family Practice, 47, 62–67. Office of Technology Assessment, U.S. Congress. (1981). Cost-Effectiveness of Influenza Vaccination. Washington, DC: Office of Technology Assessment. Oldenburg, B., French, M., and Sallis, J.F. (1999). Health behavior research: The quality of the evidence base. Paper presented at the Society of Behavioral Medicine Twentieth Annual Meeting, San Diego, CA. Orlandi, M.A (1996a). Health Promotion Technology Transfer: Organizational Perspectives. Canadian Journal of Public Health, 87, Supplement 2, 528–533. Orlandi, M.A. (1996b). Prevention Technologies for Drug-Involved Youth. In J.Inciardi, L.Metsch, and C.McCoy (Eds.) Intervening with Drug-Involved Youth: Prevention, Treatment, and Research (pp. 81–100). Newbury Park, CA: Sage Publications. Orlandi, M.A. (1986). The diffusion and adoption of worksite health promotion innovations: An analysis of barriers. Preventive Medicine, 15, 522–536. Parcel, G.S, Eriksen, M.P, Lovato, C.Y., Gottlieb, N.H., Brink, S.G., and Green, L.W (1989). The diffusion of school-based tobacco-use prevention programs: Program description and baseline data. Health Education Research, 4, 111–124. Parcel, G.S, O’Hara-Tompkins, N.M, Harris, R.B., Basen-Engquist, K.M., McCormick, L.K., Gottlieb, N.H., and Eriksen, M.P. (1995). Diffusion of an Effective Tobacco Prevention Program. II. Evaluation of the Adoption Phase. Health Education Research, 10, 297–307. Parcel, G.S, Perry, C.L, and Taylor W.C. (1990). Beyond Demonstration: Diffusion of Health Promotion Innovations. In N.Bracht (Ed.), Health Promotion at the Community Level (pp. 229–251). Thousand Oaks, CA: Sage Publications. Parcel, G.S., Simons-Morton, B.G., O’Hara, N.M,. Baranowski, T., and Wilson, B. (1989). School promotion of healthful diet and physical activity: Impact on learning outcomes and self-reported behavior. Health Education Quarterly, 16, 181–199. Park, P., Brydon-Miller, M., Hall, B., and Jackson, T. (Eds.) (1993). Voices of Change: Participatory Research in the United States and Canada. Westport, CT: Bergin and Garvey. Parker, E.A., Schulz, A.J., Israel, B.A., and Hollis, R. (1998). East Side Village Health Worker Partnership: Community-based health advisor intervention in an urban area. Health Education and Behavior, 25, 24–45.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Parsons, T. (1951). The Social System. Glencoe, IL: Free Press. Patton, M.Q. (1987). How to Use Qualitative Methods In Evaluation. Newbury Park, CA: Sage Publications. Patton, M.Q. (1990). Qualitative Evaluation And Research Methods, 2nd Edition. Newbury Park, CA: Sage Publications. Pearce, N. (1996). Traditional epidemiology, modern epidemiology and public health. American Journal of Public Health, 86, 678–683. Pendleton, L. and House, W.C. (1984). Preferences for treatment approaches in medical care. Medical Care, 22, 644–646. Pentz, M.A. (1998). Research to practice in community-based prevention trials. Preventive intervention research at the crossroads: contributions and opportunities from the behavioral and social sciences. Programs and Abstracts (pp. 82–83). Bethesda, MD. Pentz, M.A., and Trebow, E. (1997). Implementation issues in drug abuse prevention research. Substance Use and Misuse, 32, 1655–1660. Pentz, M.A., Trebow, E., Hansen, W.B., MacKinnon, D.P., Dwyer, J.H., Flay, B.R., Daniels, S., Cormack, C., and Johnson, C.A. (1990). Effects of program implementation on adolescent drug use behavior: The Midwestern Prevention Project (MPP). Evaluation Review, 14, 264–289. Perry, C.L. (1999). Cardiovascular disease prevention among youth: Visioning the future. Preventive Medicine, 29, S79–S83. Perry, C.L., Murray, D.M, and Griffin, G. (1990). Evaluating the statewide dissemination of smoking prevention curricula: Factors in teacher compliance. Journal of School Health, 60, 501–504. Plough, A. and Olafson, F. (1994). Implementing the Boston Healthy Start Initiative: A case study of community empowerment and public health. Health Education Quarterly, 21, 221–234. Price, R.H. (1989). Prevention programming as organizational reinvention: From research to implementation. In M.M.Silverman and V.Anthony (Eds.) Prevention of Mental Disorders, Alcohol and Drug Use in Children and Adolescents (pp. 97–123). Rockville, MD: Department of Health and Human Services. Price, R.H. (1998). Theory guided reinvention as the key high fidelity prevention practice. Paper presented at the National Institute of Health meeting, “Preventive Intervention Research at the Crossroads: Contributions and Opportunities from the Behavioral and Social Sciences,” Bethesda, MD. Pronk, N.P. and O’Connor, P.J. (1997). Systems approach to population health improvement. Journal of Ambulatory Care Management, 20, 24–31. Putnam, R.D. (1993). Making Democracy Work: Civic Traditions in Modern Italy. Princeton: Princeton University. Rabeneck, L., Viscoli, C.M., and Horwitz, R.I. (1992). Problems in the conduct and analysis of randomized clinical trials. Are we getting the right answers to the wrong questions? Archives of Internal Medicine, 152, 507–512. Raiffa, H. (1968). Decision Analysis. Reading, MA: Addison-Wesley. Reason, P. (1994). Three approaches to participative inquiry. In N.K.Denzin and Y.S. Lincoln (Eds.) Handbook of Qualitative Research (pp. 324–339). Thousand Oaks, CA: Sage.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Reason, P. (Ed.). (1988). Human Inquiry in Action: Developments in New Paradigm Research. London: Sage. Reichardt, C.S. and Cook, T.D. (1980). “Paradigms Lost”: Some thoughts on choosing methods in evaluation research. Evaluation and Program Planning: An International Journal 3, 229–236. Rivara, F.P., Grossman, D.C., and Cummings, P. (1997a). Injury prevention. First of two parts. New England Journal of Medicine, 337, 543–548. Rivara, F.P., Grossman, D.C., Cummings P. (1997b). Injury prevention. Second of two parts. New England Journal of Medicine, 337, 613–618. Roberts-Gray, C., Solomon, T., Gottlieb, N., and Kelsey, E. (1998). Heart partners: A strategy for promoting effective diffusion of school health promotion programs. Journal of School Health, 68, 106–116. Robertson, A. and Minkler, M. (1994). New health promotion movement: A critical examination. Health Education Quarterly, 21, 295–312. Rogers, E.M. (1983). Diffusion of Innovations, 3rd ed. New York: The Free Press. Rogers, E.M. (1995). Communication of Innovations. New York: The Free Press. Rogers, G.B. (1996). The safety effects of child-resistant packaging for oral prescription drugs. Two decades of experience. Journal of the American Medical Association, 275, 1661–1665. Rohrbach, L.A, D’Onofrio, C., Backer, T., and Montgomery, S. (1996). Diffusion of school’ based substance abuse prevention programs. American Behavioral Scientist, 39, 919– 934. Rossi, P.H. and Freeman, H.E. (1989). Evaluation: A Systematic Approach, 4th Edition. Newbury Park, CA: Sage Publications. Rutherford, G.W. (1998). Public health, communicable diseases, and managed care: Will managed care improve or weaken communicable disease control? American Journal of Preventive Medicine, 14, 53–59. Sackett, D.L., Richardson, W.S., Rosenberg, W., and Haynes, R.B. (1997) Evidence-Based Medicine: How to Practice and Teach EBM. New York: Churchill Livingstone. Sarason, S.B. (1984). The Psychological Sense of Community: Prospects for a Community Psychology. San Francisco: Jossey-Bass. Schein, E.H. (1987). Process Consulting. Reading, MA: Addition Wesley. Schensul, J.J., Denelli-Hess, D., Borreo, M.G., and Bhavati, M.P. (1987). Urban comadronas: Maternal and child health research and policy formulation in a Puerto Rican community. In D.D.Stull and J.J.Schensul (Eds.) Collaborative Research and Social Change: Applied Anthropology in Action (pp. 9–32). Boulder, CO: Westview. Schensul, S.L. (1985). Science, theory and application in anthropology. American Behavioral Scientist, 29, 164–185. Schneiderman, L.J., Kronick, R., Kaplan, R.M., Anderson, J.P., and Langer, R.D. (1992). Effects of offering advance directives on medical treatments and costs. Annals of Internal Medicine 117, 599–606. Schriver, K.A. (1989). Evaluating text quality: The continuum from text-focused to reader-focused methods. IEEE Transactions on Professional Communication, 32, 238–255. Schulz, A.J, Israel, B.A, Selig, S.M., and Bayer, I.S. (1998a). Development and implementation of principles for community-based research in public health. In R.H. Macnair (Ed.) Research Strategies For Community Practice (pp. 83–110). New York: Haworth Press.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Schulz, A.J., Parker, E.A., Israel, B.A, Becker, A.B., Maciak, B., and Hollis, R. (1998b). Conducting a participatory community-based survey: Collecting and interpreting data for a community health intervention on Detroit’s East Side. Journal of Public Health Management Practice, 4, 10–24. Schwartz, L.M., Woloshin, S., Black, W.C., and Welch, H.G. (1997). The role of numeracy in understanding the benefit of screening mammography. Annals of Internal Medicine, 127, 966–972. Schwartz, N. (1999). Self-reports: How the questions shape the answer. American Psychologist, 54, 93–105. Seligman M.E. (1996). Science as an ally of practice. American Psychologist, 51, 1072– 1079. Shadish, W.R., Cook, T.D., and Leviton, L.C. (1991). Foundations of Program Evaluation. Newbury Park, CA: Sage Publications. Shadish, W.R., Matt, G.E., Navarro, A.M., Siegle, G., Crits-Christoph, P., Hazelrigg, M.D., Jorm, A.F., Lyons, L.C., Nietzel, M.T., Prout, H.T., Robinson, L., Smith, M.L., Svartberg, M., and Weiss, B. (1997). Evidence that therapy works in clinically representative conditions. Journal of Consulting and Clinical Psychology, 65, 355–365. Sharf, B.F. (1997). Communicating breast cancer on-line: Support and empowerment on the internet. Women and Health, 26, 65–83. Simons-Morton, B.G., Green, W.A., and Gottlieb, N. (1995). Health Education and Health Promotion, 2nd Edition. Prospect Heights, IL: Waveland. Simons-Morton, B.G., Parcel, G.P., Baranowski, T., O’Hara, N., and Forthofer, R. (1991). Promoting a healthful diet and physical activity among children: Results of a school-based intervention study. American Journal of Public Health, 81, 986–991. Singer, M. (1993). Knowledge for use: Anthropology and community-centered substance abuse research . Social Science and Medicine, 37, 15–25. Singer, M. (1994). Community-centered praxis: Toward an alternative non-dominative applied anthropology. Human Organization, 53, 336–344. Smith, D.W., Steckler, A., McCormick, L.K., and McLeroy, K.R. (1995). Lessons learned about disseminating health curricula to schools. Journal of Health Education, 26, 37– 43. Smithies, J. and Adams, L. (1993). Walking the tightrope. In J.K.Davies, and M.P.Kelly (Eds.) Healthy Cities: Research and Practice (pp. 55–70). New York: Routledge. Solberg, L.I., Kottke, T.E, and Brekke, M.L. (1998a). Will primary care clinics organize themselves to improve the delivery of preventive services? A randomized controlled trial. Preventive Medicine, 27, 623–631. Solberg, L.I., Kottke, T.E., Brekke, M.L., Conn, S.A., Calomeni, C.A., and Conboy, K.S. (1998b). Delivering clinical preventive services is a systems problem. Annals of Behavioral Medicine, 19, 271–278. Sorensen, G., Emmons, K., Hunt, M.K., and Johnston, D. (1998a). Implications of the results of community intervention trials. Annual Rreview of Public Health, 19, 379– 416. Sorensen, G., Thompson, B., Basen-Engquist, K., Abrams, D., Kuniyuki, A., DiClemente, C., and Biener, L. (1998b). Durability, dissemination and institutionalization of worksite tobacco control programs: Results from the Working Well Trial. International Journal of Behavioral Medicine, 5, 335–351.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Spilker, B. (1996). Quality of Life and Pharmacoeconomics. In B.Spilker (Ed) Clinical Trials 2nd Edition. Philadelphia: Lippincott-Raven. Steckler, A., Goodman, R.M., McLeroy, K.R., Davis, S., and Koch, G. (1992). Measuring the diffusion of innovative health promotion programs. American Journal of Health Promotion, 6, 214–224. Steckler, A.B., Dawson, L., Israel, B.A., and Eng, E. (1993). Community health development: An overview of the works of Guy W.Steuart. Health Education Quarterly, Suppl. 1, S3–S20. Steckler, A.B., McLeroy, K.R., Goodman, R.M., Bird, S.T., and McCormick, L. (1992). Toward integrating qualitative and quantitative methods: an introduction. Health Education Quarterly, 19, 1–8. Steuart, G.W. (1993). Social and cultural perspectives: Community intervention and mental health. Health Education Quarterly, S99. Stokols, D. (1992). Establishing and maintaining healthy environments: Toward a social ecology of health promotion. American Psychologist, 47, 6–22. Stokols, D. (1996). Translating social ecological theory into guidelines for community health promotion. American Journal of Health Promotion, 10, 282–298. Stone, E.J., McGraw, S.A., Osganian, S.K., and Elder, J.P. (Eds.) (1994). Process evaluation in the multicenter Child and Adolescent Trial for Cardiovascular Health (CATCH). Health Education Quarterly, Suppl. 2, 1–143. Stringer, E.T. (1996). Action Research: A Handbook For Practitioners. Thousand Oaks, CA: Sage. Strull, W.M., Lo, B., and Charles, G. (1984). Do patients want to participate in medical decision making? Journal of the American Medical Association, 252, 2990–2994. Strum, S. (1997). Consultation and patient information on the Internet: The patients’ forum. British Journal of Urology, 80, 22–26. Susser, M. (1995). The tribulations of trials-intervention in communities. American Journal of Public Health, 85, 156–158. Susser, M. and Susser, E. (1996a). Choosing a future for epidemiology. I.Eras and paradigms. American Journal of Public Health, 86, 668–673. Susser, M. and Susser, E. (1996b). From black box to Chinese boxes and eco-epidemiology. American Journal of Public Health, 86, 674–677. Tandon, R. (1981). Participatory evaluation and research: Main concepts and issues. In W. Fernandes, and R.Tandon (Eds.) Participatory Research and Evaluation (pp. 15–34). New Delhi: Indian Social Institute. Thomas, S.B. and Morgan, C.H. (1991). Evaluation of community-based AIDS education and risk reduction projects in ethnic and racial minority communities. Evaluation and Program Planning , 14, 247–255. Thompson, D.C., Nunn, M.E., Thompson, R.S., and Rivara, F.P. (1996a). Effectiveness of bicycle safety helmets in preventing serious facial injury. Journal of the American Medical Association, 276, 1974–1975. Thompson, D.C., Rivara, F.P., and Thompson, R.S. (1996b). Effectiveness of bicycle safety helmets in preventing head injuries: A case-control study. Journal of the American Medical Association, 276, 1968–1973.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Thompson, R.S., Taplin, S.H., McAfee, T.A., Mandelson , M.T., and Smith, A.E. (1995). Primary and secondary prevention services in clinical practice. Twenty years’ experience in development, implementation, and evaluation. Journal of the American Medical Association 273, 1130–1135. Torrance, G.W. (1976). Toward a utility theory foundation for health status index models. Health Services Research, 11, 349–369. Tversky, A. and Fox, C.R. (1995). Weighing risk and uncertainty. Psychological Review, 102, 269–283. Tversky, A. and Kahneman, D. (1988). Rational choice and the framing of decisions. In D.E.Bell, H.Raiffa, and A.Tversky (Eds.) Decision Making: Descriptive, Normative, And Prescriptive Interactions (pp. 167–192). Cambridge: Cambridge University Press. Tversky, A. and Shafir, E. (1992). The disjunction effect in choice under uncertainty. Psychological Science, 3, 305–309. U.S. Department of Health and Human Services. (1990). Smoking, Tobacco, and Cancer Program: 1985–1989 Status Report. Washington, DC: NIH Publication #90–3107. Vega, W.A. (1992). Theoretical and pragmatic implications of cultural diversity for community research. American Journal of Community Psychology, 20, 375–391. Von Winterfeldt, D. and Edwards, W. (1986). Decision Analysis and Behavioral Research. New York: Cambridge University Press. Wagner, E., Austin, B., and Von Korff, M. (1996). Organizing care for patients with chronic illness. Millbank Quarterly, 76, 511–544. Wallerstein, N. (1992). Powerlessness, empowerment, and health: implications for health promotion programs. American Journal of Health Promotion, 6, 197–205. Walsh, J.M.E. and McPhee, S.J. (1992). A systems model of clinical preventive care: An analysis of factors influencing patient and physician. Health Education Quarterly, 19, 157–175. Walter, H.J. (1989). Primary prevention of chronic disease among children: The school-based “Know Your Body Intervention Trials.” Health Education Quarterly, 16, 201– 214. Waterworth, S. and Luker, K.A. (1990). Reluctant collaborators: Do patients want to be involved in decisions concerning care? Journal of Advanced Nursing, 15, 971–976. Weisz, J.R., Weiss, B., and Donenberg, G.R. (1992). The lab versus the clinic. Effects of child and adolescent psychotherapy. American Psychologist, 47, 1578–1585. Wennberg, J.E. (1995). Shared decision making and multimedia. In L.M.Harris (Ed.) Health and the New Media: Technologist Transforming Personal And Public Health (pp. 109–126). Mahwah, NJ: Erlbaum. Wennberg, J.E. (1998). The Dartmouth Atlas Of Health Care In the United States. Hanover, NH: Trustees of Dartmouth College. Whitehead, M. (1993). The ownership of research. In J.K.Davies and M.P.Kelly (Eds.) Healthy Cities: Research and practice (pp. 83–89). New York: Routledge. Williams, D.R. and Collins, C. (1995). U.S. socioeconomic and racial differences in health: patterns and explanations. Annual Review of Sociology, 21, 349–386. Windsor, R., Baranowski, T., Clark, N., and Cutter, G. (1994). Evaluation Of Health Promotion, Health Education And Disease Prevention Programs. Mountain View, CA: Mayfield. Winkleby, M.A. (1994). The future of community-based cardiovascular disease intervention studies. American Journal of Public Health, 84, 1369–1372.

Health and Behavior: The Interplay of Biological, Behavioral, and Societal Influences Woloshin, S., Schwartz, L.M., Byram, S.J., Sox, H.C., Fischhoff, B., and Welch, H.G. (2000). Women’s understanding of the mammography screening debate. Archives of Internal Medicine, 160, 1434–1440. World Health Organization (WHO). (1986). Ottawa Charter for Health Promotion. Copenhagen: WHO. Yates, J.F. (1990). Judgment and Decision Making. Englewood Cliffs, NJ: Prentice-Hall. Yeich, S. and Levine, R. (1992). Participatory research’s contribution to a conceptualization of empowerment. Journal of Applied Social Psychology, 22, 1894–1908. Yin, R.K. (1993). Applications of case study research. Applied Social Research Methods Series, Vol. 34, Newbury Park, CA: Sage Publications. Zhu, S.H. and Anderson, N.H. (1991). Self-estimation of weight parameter in multi-attribute analysis. Organizational Behavior and Human Decision Processes, 48, 36–54. Zich, J. and Temoshok, C. (1986). Applied methodology: A primer of pitfalls and opportunities in AIDS research. In D.Feldman, and T.Johnson (Eds.) The Social Dimensions of AIDS (pp. 41–60). New York: Praeger.
