duction that is not statistically significant, the investigators will not be motivated to submit the results for publication or, if they do submit them, journal editors will consider such “negative studies” to be of low priority. Those considerations do not invalidate the published studies, but they suggest that a meta-analysis or quantitative estimate based on the published studies might overestimate the effects of smoking bans. The committee tried to identify and seek the results of all studies of the effects of smoking bans on the incidence of cardiovascular disease events. It searched CRISP and ClinicalTrials.gov to determine whether other studies of the effects of smoking bans on acute coronary events had been funded or approved and never published, and it found none. The National Association of City and County Health Officials Web site was also searched to determine whether other studies had been initiated, and the committee requested information from the Centers for Disease Control and Prevention and AHA on other studies that were under way or had been conducted and never published; no such studies were identified. There is still the possibility that studies showing no association were conducted but not published; this would bias the data toward there being an association between secondhand-smoke exposure or smoking bans and acute coronary events.
The 11 studies reviewed in this chapter show remarkable consistency: although all were observational studies that used different analytic approaches, all showed decreases in the rate of acute MI after implementation of the eight smoking bans examined. Those decreases ranged from about 6% to 47%, depending on the study and the analysis. That consistency in the direction of change gave the committee confidence that smoking bans result in a real decrease in the rate of acute MIs.
Their consistency notwithstanding, most studies drew conclusions that appear to be stronger than the data and analyses warrant. Some researchers have combined the results of the studies with meta-analytic methods to provide a point estimate of the decrease and an associated standard error (Glantz, 2008; Richiardi et al., 2009). The committee concluded that there are too many differences among the studies to have confidence in such a point estimate based on combining results of the different studies.
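The committee's reservation about a pooled point estimate can be made concrete with a toy inverse-variance fixed-effect pooling. The sketch below uses made-up relative risks and standard errors chosen only to span the reported 6-47% range; none of these numbers come from the actual studies or from the cited meta-analyses. Cochran's Q, which compares each study's estimate with the pooled value relative to its own variance, far exceeds its degrees of freedom here, which is the kind of heterogeneity that makes a single pooled estimate hard to interpret.

```python
import math

# Hypothetical log relative risks (ln RR) and standard errors for four
# illustrative "studies" -- invented values spanning a 6% to 47% decrease,
# NOT the results of the 11 studies reviewed in this chapter.
studies = [
    (math.log(0.94), 0.04),  # ~6% decrease
    (math.log(0.83), 0.06),
    (math.log(0.60), 0.10),
    (math.log(0.53), 0.12),  # ~47% decrease
]

def fixed_effect_pool(studies):
    """Inverse-variance fixed-effect pooling of log effect sizes.

    Returns the pooled log effect, its standard error, and Cochran's Q,
    a standard heterogeneity statistic (compare Q with k - 1 degrees of
    freedom; Q >> k - 1 suggests the studies disagree beyond chance).
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(studies, weights))
    return pooled, pooled_se, q

pooled, pooled_se, q = fixed_effect_pool(studies)
print(f"pooled RR = {math.exp(pooled):.2f}, SE(log RR) = {pooled_se:.3f}, Q = {q:.1f}")
```

With these invented inputs the pooled relative risk comes out near 0.84, but Q is roughly an order of magnitude above its 3 degrees of freedom, so the apparent precision of the pooled standard error is misleading: the studies are estimating visibly different effects, which mirrors the committee's reasoning above.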
First, the nature of the “treatment”—the smoking ban and collateral programs—is far from clear in specific studies, so there may not be a common intervention to assess. Any form of causal analysis needs to be explicit about the details of the intervention and the fidelity with which it was implemented. In addition, some of the studies tested different “treatments” as part of their hypotheses: some looked simply at the effect of smoking bans, others looked more directly at changes in secondhand-smoke exposure.