Fulkerson, and Park, 2000; Jones-Webb et al., 1997; Schwartz, Farrow, Banks, and Giesel, 1998; Wagenaar et al., 1993). In addition, the national surveys of college student drinking find that a large percentage of college youth report they do not have to pay anything for alcohol, presumably because they are at a party where someone else is supplying the alcohol (Wechsler, Kuo, Lee, and Dowdall, 2000).
The importance of alcohol supply to youth has been studied by examining the effects of alcohol advertising on youth decisions about drinking (Atkin, Eadie, Leather, McNeill, and Scott, 1988; Austin and Nach-Ferguson, 1995; Grube and Wallack, 1994), and there is evidence that such promotion affects youth drinking. Issues such as the legal age for purchase of alcohol and the enforcement of laws against underage sales have also been shown to affect youth access to retail supply (Grube, 1997; Wagenaar and Toomey, 2002). Finally, alcohol prices have an important effect on youth drinking (see Cook and Moore, 2001).
Solid empirical evidence should form the basis for the decisions that communities, agencies, and individuals make about how to reduce availability of alcohol to youth. At a minimum, this evidence should (1) provide substantial indication of effectiveness in reducing access and, ideally, youth drinking, and (2) be based on methodologically strong research. In other words, good ideas are not sufficient, regardless of their logical, intuitive, or popular appeal.
First, the evaluation of the prevention strategy must show that it affects reliable and valid measures of youth drinking, or risk factors clearly shown to increase the risk of drinking. Second, this effect must be demonstrated using designs that allow the researcher to rule out competing explanations, with appropriate statistical analyses and an appropriate comparison or control group or condition. Third, an appropriate research design should be used. Evaluations of investigator-designed prevention programs can employ randomized controlled trials, which rule out competing hypotheses. With many policy interventions, however, random assignment is impractical, inappropriate, or impossible. With interventions such as alcohol taxation and drunk driving laws, the political process, not random assignment, determines whether individuals in a given jurisdiction on a given date are subject to high or low taxes, tough or lenient drunk driving laws, and so on. Such studies have been called quasi-experimental designs (Cook and Campbell, 1979), a term describing designs that do not employ random assignment to experimental or control conditions. Scientists also use the terms natural experiments or “experiments without random assignment.” Alternative designs should be em-