12 Conducting and Disseminating Behavioral Economics Research
Pages 189-200

From page 189...
... We focus on their particular relevance to behavioral economics because the results of interventions are usually nuanced and effects depend on precise wording, framing, and the definition of the target population, which makes generalizing results especially important and challenging. Effect sizes are often small, which means that results may be particularly sensitive to the analytic method used and may not generalize beyond the original setting.
From page 190...
... The American Economic Review subsequently began requiring authors to make their data and computer programs publicly available, and other journals in the field followed suit; since then, many studies have been successfully replicated. The American Economic Review has recently gone further, assigning a data editor to replicate, at least in primary substance, all papers accepted for publication.
From page 191...
... In one effort, a group of researchers chose 100 lab experiments published in psychology journals and attempted to reproduce the originally published results. Only 37 percent of the replication experiments had statistically significant results, and the effect sizes were less than half those in the published papers, suggesting that the original studies had overstated the magnitude of the findings (Open Science Collaboration, 2015).
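Selecting only statistically significant results mechanically inflates published effect sizes. The simulation below is an illustrative sketch of that mechanism; the true effect, standard error, and study count are assumed values, not figures from the study above:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # assumed small true effect (standardized units)
SE = 0.15           # assumed standard error of each study's estimate
N_STUDIES = 10_000  # hypothetical literature

# Each study produces a noisy estimate of the true effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# "Publish" only estimates whose z statistic clears the conventional 1.96 cutoff.
published = [e for e in estimates if abs(e / SE) > 1.96]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean of all estimates:   {statistics.mean(estimates):.3f}")
print(f"mean published estimate: {statistics.mean(published):.3f}")
```

Because only the largest draws clear the significance threshold, the mean published estimate lands well above the true effect, which is the pattern the replication project observed.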
From page 192...
... Many of the studies in our review were conducted in a single geographical area, and many covered populations that are not representative of the general population in terms of race, gender, or socioeconomic status. Without tests of interventions and programs in different settings and different groups, it is difficult to know whether a particular policy tested in one setting and on one group would have the same results in a different setting and a different group.2 A related but conceptually distinct issue is whether minor variations in the treatment itself generate significant differences in findings.
From page 193...
... Nonacademic institutions that publish research also generally report funding sources because those institutions almost exclusively conduct sponsored research. For example, a substantial percentage of randomized controlled studies of nudge interventions are funded by sponsors, and this is usually acknowledged in the published report.
From page 194...
... However, addressing issues with replicability, generalizability, and publication bias would significantly increase the overall usefulness of behavioral economics research evidence. 3 The most common cutoff value is approximately two, implying that the probability (p) of obtaining the observed result by chance alone is less than 5 percent.
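The footnote's cutoff can be checked directly: for a large-sample test statistic, a value of about two corresponds to a two-sided p-value just under 0.05. A minimal sketch using only the standard library (the function name is ours, for illustration):

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z (or large-sample t) statistic,
    computed from the standard normal CDF via math.erf."""
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The conventional cutoff of roughly two gives p just under 0.05.
print(f"{two_sided_p(1.96):.4f}")  # ~0.0500
print(f"{two_sided_p(2.0):.4f}")   # ~0.0455
```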
From page 195...
... There are still many journals that do not ask authors to provide their data and computer code for published articles, even when the data are in the public domain. Many university and for-profit academic publishers could provide more financial support for the editorial staff of professional journals to check computer code themselves.
From page 196...
... This is possible for randomized controlled trials conducted by nudge units and in cases where peer-reviewed study designs have to be reported before being carried out.6 We note that many researchers do not take into account common features of data that can bias standard errors downward, such as autocorrelation, geographic correlation, and multiple testing of hypotheses. Journal editors could require that these calculations be provided, to avoid understated standard errors and upwardly biased findings of statistical significance.
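The multiple-testing concern can be made concrete: testing many hypotheses at a fixed significance level inflates the chance of at least one spurious finding. A short illustrative calculation (the choice of k values is arbitrary), including the standard Bonferroni correction as one simple remedy:

```python
# If each of k independent hypotheses is tested at alpha = 0.05,
# the family-wise error rate 1 - (1 - alpha)^k grows quickly with k.
ALPHA = 0.05

for k in (1, 5, 20):
    family_wise = 1 - (1 - ALPHA) ** k
    bonferroni = ALPHA / k  # per-test level that caps the family-wise rate
    print(f"k={k:2d}  P(>=1 false positive)={family_wise:.2f}  "
          f"Bonferroni per-test alpha={bonferroni:.4f}")
```

With 20 tests, the chance of at least one false positive is roughly 64 percent, which is why uncorrected multiple testing biases findings toward spurious statistical significance.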
From page 197...
... Recommendation 12-1: Researchers, funders of research, university leaders, and journal editors in behavioral economics should take steps to support the replicability and generalizability of behavioral economics research, more fully acknowledge publication bias and take steps to detect its presence, and counter publication bias using a variety of approaches. Table 12-1 provides examples of the ways researchers, funders, journal editors, and university leaders could strengthen research in behavioral economics -- and set an example for other fields in which similar problems arise.
From page 198...
... TABLE 12-1  Examples of Ways to Strengthen Research

The table marks, for each goal, which of four groups -- researchers, funders, journal editors, and university administrators -- should act on it:

- Encourage and reward the publication of null results
- Encourage the use of sample sizes sufficient to detect small effects
- Conduct, encourage, and reward replication of results and research transparency (all four groups)
- Commit to uncovering systematic use of p-hacking, using measures and funnel plots in meta-analyses and designs to enhance transparency
- Set standards for evidence gathering and evaluation
- Develop a shared, searchable platform of studies that is maintained in perpetuity as a resource for future researchers, whether or not the studies' results are published

