channels (churches, community groups) to reach out to the African-American community.
Examination of this evidence might begin with simple cross-group comparisons of achieved reach, both overall and for each of the specific channels used. Comparison would begin with evidence about differential access to channels, but would also include evidence of recall of messages transmitted over each channel, and perhaps reports that exposure to the message produced subsequent conversation. Like the first test described among the approaches to evaluating the outcome effects of programs, this approach would provide useful information about what exposure a particular campaign was achieving across diversity subgroups. However, in isolation it would not indicate whether a different diversity channel strategy would improve relative exposure among groups.
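As a minimal sketch of such a cross-group comparison, the following assumes hypothetical survey counts of message recall for two subgroups reached through the same channel, and applies a standard two-proportion z-test; the counts and function name are invented for illustration, not drawn from any actual campaign.

```python
from math import sqrt, erf

def two_proportion_z(recall_a, n_a, recall_b, n_b):
    """Two-sided z-test for a difference in message-recall rates
    between two audience subgroups (hypothetical survey counts)."""
    p_a, p_b = recall_a / n_a, recall_b / n_b
    # pooled proportion under the null hypothesis of equal recall
    p_pool = (recall_a + recall_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Invented example: 120 of 400 respondents in one subgroup recall the
# message versus 90 of 400 in another
p_a, p_b, z, p = two_proportion_z(120, 400, 90, 400)
```

A significant difference here would flag unequal achieved exposure, but, as the text notes, it would not by itself indicate whether a different channel strategy would narrow the gap.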
Comparisons of diversity strategies might rely on the more elaborate comparison tests, parallel to those described for the outcome analyses, but with a focus on recalled exposure rather than on effects on outcomes.
There is an easy argument that subgroups will respond differently to messages with varying sources or with varying styles or images, even if the message strategy does not vary. On the other hand, every new execution adds to the cost of the campaign, and if it is meant to complement a differential channel purchase strategy, those additional costs can be substantial. The issue is not whether such varied executions would be helpful, but rather how much executional variation should be done, and the extent to which the advantage outweighs the cost. What evidence might be useful to support targeted execution strategies rather than a common-denominator execution strategy (e.g., having ads that feature boys and ads that feature girls versus ads that feature both)? Logically, the same sort of evidence that will be persuasive for