6 Generalizability of Benefit-Cost Analyses
Pages 47-53


From page 47...
... Mark Lipsey discussed the potential value of meta-analysis for this purpose, and Howard Bloom examined some broad design and analysis considerations. META-ANALYSIS Lipsey began by suggesting that, when research findings can be generalized, it means that the same intervention will produce the same or nearly the same effect despite variation on some dimensions, such as the characteristics of the providers or recipients of the intervention, the setting, and perhaps certain nonessential features of the intervention itself.
From page 48...
... A related problem is that, even with a reasonably concise definition of a particular intervention, variability abounds. A statistical test used in meta-analysis, the Q test, answers the question of whether the between-study variation in effect sizes for a given outcome is greater than one would expect from within-study sampling error alone.
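For readers unfamiliar with the Q test, the following minimal sketch (in Python, with made-up study data) shows how the statistic is typically computed from per-study effect sizes and their sampling variances. It illustrates the general technique only; it is not the workshop participants' own analysis.

```python
import numpy as np
from scipy import stats

def cochran_q(effect_sizes, variances):
    """Cochran's Q test for between-study heterogeneity.

    effect_sizes : per-study effect estimates (e.g., standardized mean differences)
    variances    : per-study sampling variances of those estimates
    """
    y = np.asarray(effect_sizes, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # inverse-variance weights
    mu_hat = np.sum(w * y) / np.sum(w)        # weighted mean effect
    q = np.sum(w * (y - mu_hat) ** 2)         # heterogeneity statistic
    df = y.size - 1
    p_value = stats.chi2.sf(q, df)            # Q ~ chi-square(k-1) under homogeneity
    i_squared = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0  # variation beyond sampling error
    return q, p_value, i_squared

# Hypothetical effect sizes and variances from five studies
q, p, i2 = cochran_q([0.30, 0.12, 0.45, 0.05, 0.25],
                     [0.02, 0.03, 0.04, 0.01, 0.05])
print(f"Q = {q:.2f}, p = {p:.3f}, I^2 = {i2:.1f}%")
```

A large Q relative to its degrees of freedom (small p-value) suggests that the studies' effects differ by more than sampling error, which is exactly the kind of variability the discussion points to.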
From page 49...
... Lipsey explained that the methodological variability results not only from differences in design associated with randomization but also from variation in the way outcomes are operationalized. Because there is no settled way of measuring, for example, the noncognitive outcomes of pre-K programs, researchers have been creative, using parent reports, teacher reports, observations, or various standardized scales.
From page 50...
... What can be done about these difficulties? One approach is response surface modeling or, more specifically, effect size surface modeling, a method for statistically modeling the relationships between intervention effects and key explanatory variables. The response surface is defined by the multiple dimensions of interest -- such as subject characteristics, intervention characteristics, settings, methodology, and the like -- along which effects vary.
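One simple way to approximate an effect size surface is a weighted meta-regression of study-level effects on study-level moderators. The sketch below, with hypothetical studies and invented moderators (program hours and mean child age), is one plausible implementation of the idea, not the specific model described at the workshop.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: each row is one study's effect estimate,
# its sampling variance, and two moderators (program hours, mean child age).
effects   = np.array([0.30, 0.12, 0.45, 0.05, 0.25, 0.38])
variances = np.array([0.02, 0.03, 0.04, 0.01, 0.05, 0.02])
hours     = np.array([200, 120, 360, 100, 250, 300])
age       = np.array([4.0, 3.5, 4.5, 3.0, 4.0, 4.2])

# Weighted least squares approximates a fixed-effect meta-regression:
# effect_i = b0 + b1*hours_i + b2*age_i + error, weighted by 1/variance.
X = sm.add_constant(np.column_stack([hours, age]))
fit = sm.WLS(effects, X, weights=1.0 / variances).fit()
print(fit.summary())

# Predicted effect at a new point on the "surface" (e.g., 280 hours, age 4.0)
print(fit.predict([[1.0, 280, 4.0]]))
```

The fitted model can then be evaluated at combinations of moderator values that were never studied directly, which is the practical payoff of the surface framing.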
From page 51...
... Practitioners treat individuals, policy makers target defined groups, and researchers study averages and patterns of variation -- yet all need to learn from the same sources of information. That dilemma translates into questions about how to test multiple hypotheses and identify statistically significant findings.
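One common way to handle the multiple-testing problem is to adjust p-values across the whole family of hypotheses. The sketch below applies the Benjamini-Hochberg false discovery rate adjustment from statsmodels to hypothetical p-values; the workshop discussion did not endorse any particular correction, so this is purely illustrative.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from several subgroup/outcome contrasts in one study
p_values = [0.001, 0.020, 0.041, 0.180, 0.300, 0.650]

# Benjamini-Hochberg controls the false discovery rate across the family of tests;
# method="bonferroni" would control the family-wise error rate instead.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
for p_raw, p_adj, keep in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f}  adjusted p = {p_adj:.3f}  significant: {keep}")
```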
From page 52...
... They used a two-level hierarchical model of cross-site variation in experimental estimates of program effects; the data covered 59 program offices in 8 states and more than 69,000 participants, drawing on administrative records, participant surveys, and office staff surveys. The programs studied provided basic education, assistance with job searches, and vocational training.
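The full two-level model is more elaborate, but its core idea -- site-level impact estimates varying around a grand mean with some between-site variance -- can be sketched with a simple method-of-moments (DerSimonian-Laird style) estimator. The numbers below are hypothetical, and this is not the authors' actual specification.

```python
import numpy as np

def cross_site_random_effects(site_effects, site_variances):
    """Two-level summary model: site-level effect estimates vary around a
    grand mean with between-site variance tau^2 (method-of-moments estimate).
    """
    y = np.asarray(site_effects, dtype=float)
    v = np.asarray(site_variances, dtype=float)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)
    k = y.size
    # Between-site variance: how much sites differ beyond sampling error
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    # Re-weight using both within-site and between-site variance
    w_star = 1.0 / (v + tau2)
    mu_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = np.sqrt(1.0 / np.sum(w_star))
    return mu_re, se_re, tau2

# Hypothetical impact estimates (earnings gains) from six program offices,
# with squared standard errors as the within-site variances
mu, se, tau2 = cross_site_random_effects(
    [1200, 400, 900, -100, 650, 1500],
    [250**2, 300**2, 200**2, 350**2, 275**2, 225**2])
print(f"pooled impact = {mu:.0f}, SE = {se:.0f}, between-site variance = {tau2:.0f}")
```

A nonzero between-site variance is the statistical expression of the cross-site variation in program effects that the hierarchical model was built to quantify.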
From page 53...
... Perhaps more important, however, is success with a research model that makes use of preplanned subgroup analysis as well as common measures and protocols across studies. Others agreed, suggesting that if some modest core measures for critical outcomes and variables could be established for common use, it would greatly facilitate the work of meta-analysis.

