Reference Guide on Survey Research--Shari Seidman Diamond
Pages 359-424

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 359...
... Purpose and Design of the Survey, 373
A. Was the Survey Designed to Address Relevant Questions? 373
B. Was Participation in the Design, Administration, and Interpretation of the Survey Appropriately Controlled to Ensure the Objectivity of the Survey?
From page 360...
... 417
VIII. Acknowledgment, 418
Glossary of Terms, 419
References on Survey Research, 423
From page 361...
... regularly rely on various forms of nonprobability sampling when conducting surveys. Consistent with Federal Rule of Evidence 703, courts generally have accepted such evidence.6 Thus, in this reference guide, both the probability sample and the nonprobability sample are discussed.
From page 362...
... a description of how the sample was drawn and an explanation for why that sample design was appropriate; (3) a report on response rate and the ability of the sample to represent the target population; and (4)
From page 363...
... A. Use of Surveys in Court
Fifty years ago the question of whether surveys constituted acceptable evidence still was unsettled.11 Early doubts about the admissibility of surveys centered on their use of sampling12 and their status as hearsay evidence.13 Federal Rule of Evidence
10. Lanham Act cases involving trademark infringement or deceptive advertising frequently require expedited hearings that request injunctive relief, so judges may need to be more familiar with survey methodology when considering the weight to accord a survey in these cases than when presiding over cases being submitted to a jury.
From page 364...
... . Survey research also is addressed in the Manual for Complex Litigation, Second § 21.484 (1985)
From page 365...
... The survey was offered as one way to estimate damages.25 In a Title IX suit based on allegedly discriminatory scheduling of girls'
18. Some sample surveys are so well accepted that they may not even be recognized as surveys. For example, some U.S.
From page 366...
... 2001) ("Because the determination of whether a mark has acquired secondary meaning is primarily an empirical inquiry, survey evidence is the most direct and persuasive evidence.")
From page 367...
... As with any scientific research, the usefulness of the information obtained from a survey depends on the quality of research design. Several critical factors have emerged that have limited the value of some of these surveys: problems in defining the relevant target population and identifying an appropriate sampling frame, response rates that raise questions about the representativeness of the results, and a failure to ask questions that assess opinions on the relevant issue.
From page 368...
... As the court in United States v. Orians recognized, "The acceptance in the scientific community depends in large part on how the relevant scientific community is defined."38 In rejecting the defendants' urging that the court consider as relevant only psychophysiologists whose work is dedicated in large part to polygraph research, the court noted that Daubert "does not require the court to limit its inquiry to those individuals that base their livelihood on the acceptance of the relevant scientific theory.
From page 369...
... Supreme Court determined that the Eighth Amendment's prohibition of "cruel and unusual punishment" forbids the execution of mentally retarded persons.45 Following the interpretation advanced in Trop v. Dulles46 that "The Amendment must draw its meaning from the evolving standards of decency that mark the progress of a maturing society,"47 the Court examined a variety of sources, including legislative judgments and public opinion polls, to find that a national consensus had developed barring such executions.48
41. See Iacono & Lykken, supra note 33, at 430, tbl.
From page 370...
... , at least 9 out of 10, juries have not imposed the death sentence."53 In Atkins, Chief Justice Rehnquist complained about the absence of jury verdict data.54 Had such data been available, however, they would have been irrelevant because a "survey" of the jurors who have served in such cases would constitute a biased sample of the public. A potential juror unwilling to impose the death penalty on a mentally retarded person would have been ineligible to serve in a capital case involving a mentally retarded defendant because the juror would not have been able to promise during voir dire that he or she would be willing to listen to the evidence and impose the death penalty if the evidence warranted it.
From page 371...
... Chief Justice Rehnquist noted two weaknesses reflected in the data presented to the Court. First, almost no information was provided about the target populations from which the samples were drawn or the methodology of sample selection and data collection.
From page 372...
... A survey is presented by a survey expert who testifies about the responses of a substantial number of individuals who have been selected according to an explicit sampling plan and asked the same set of questions by interviewers who were not told who sponsored the survey or what answers were predicted or preferred. Although parties presumably are not obliged to present a survey conducted in anticipation of litigation by a nontestifying expert if it produced unfavorable results,59 the court can and should scrutinize the method of respondent selection for any survey that is presented.
From page 373...
... Was the Survey Designed to Address Relevant Questions?
The report describing the results of a survey should include a statement describing the purpose or purposes of the survey.
From page 374...
... An early handbook for judges recommended that survey interviews be "conducted independently of the attorneys in the case."66 Some courts interpreted this to mean that any evidence of attorney participation is objectionable.67 A better interpretation is that the attorney should have no part in carrying out the survey.68 However, some attorney involvement in the survey design is necessary to ensure that relevant questions are directed to a relevant population.69 The 2010 amendments to Federal Rule of Civil Procedure 26(a)
From page 375...
... In some cases, professional experience in teaching or conducting and publishing survey research may provide the requisite background. In all cases, the expert must demonstrate an understanding of foundational, current, and best practices in survey methodology, including sampling,72 instrument design (questionnaire and interview construction)
From page 376...
... Survey methodology,75 including
a. the target population,
b.
From page 377...
... Did the Sampling Frame Approximate the Population?
The target population consists of all the individuals or units that the researcher would like to study.
From page 378...
... If the coverage is underinclusive, the survey's value depends on the proportion of the target population that has been excluded from the sampling frame and the extent to which the excluded population is likely to respond differently from the included population. Thus, a survey of spectators and participants at running events would be sampling a sophisticated subset of those likely to purchase running shoes.
From page 379...
... subset of respondents in the survey was drawn from the appropriate sampling frame, the responses obtained from that subset can be examined, and inferences about the relevant population can be drawn based on that subset.88 If the relevant subset cannot be identified, however, an overbroad sampling frame will reduce the value of the survey.89 If the sampling frame does not include important groups in the target population, there is generally no way to know how the unrepresented members of the target population would have responded.90
84. See American Home Prods.
From page 380...
... Identification of a survey population must be followed by selection of a sample that accurately represents that population.91 The use of probability sampling techniques maximizes both the representativeness of the survey results and the ability to assess the accuracy of estimates obtained from the survey. Probability samples range from simple random samples to complex multistage sampling designs that use stratification, clustering of population elements into various groupings, or both.
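To make the distinction concrete, the sketch below draws a simple random sample and a proportionally stratified sample from a sampling frame. It is an illustration only, not part of the reference guide: the frame, the regional strata, and the sample size of 400 are all invented.

```python
import random

# Hypothetical sampling frame: 10,000 purchasers, each tagged with a
# region that serves as a stratum. All values here are invented.
frame = [{"id": i, "region": random.choice(["NE", "MW", "S", "W"])}
         for i in range(10_000)]

# Simple random sample: every unit has the same chance of selection.
srs = random.sample(frame, k=400)

# Stratified sample: draw separately within each region so that each
# stratum appears in proportion to its share of the frame.
def stratified_sample(frame, k):
    by_region = {}
    for unit in frame:
        by_region.setdefault(unit["region"], []).append(unit)
    sample = []
    for units in by_region.values():
        quota = round(k * len(units) / len(frame))  # proportional allocation
        sample.extend(random.sample(units, min(quota, len(units))))
    return sample  # rounding may leave the total slightly off from k

stratified = stratified_sample(frame, k=400)
```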
From page 381...
... All sample surveys produce estimates of population values, not exact measures of those values. Strictly speaking, the margin of error associated with the sample estimate assumes probability sampling.
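For a proportion estimated from a simple random sample, the margin of error takes the familiar textbook form below; the formula and the worked numbers are supplied here for illustration and are not drawn from the guide itself.

\[
\text{MOE} = z_{\alpha/2}\,\sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}
\]

With n = 400 respondents, an observed proportion of 0.5, and a 95% confidence level (z = 1.96), the margin of error is 1.96 × √(0.25/400) ≈ 0.049, roughly ±5 percentage points. When respondents are not selected by a probability method, this calculation has no strict statistical justification, which is the point the passage above makes.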
From page 382...
... Although probability sample surveys often are conducted in organizational settings and are the recommended sampling approach in academic and government publications on surveys, probability sample surveys can be expensive when in-person interviews are required, the target population is dispersed widely, or members of the target population are rare. A majority of the consumer surveys conducted for Lanham Act litigation present results from nonprobability convenience samples.101 They are admitted into evidence based on the argument that nonprobability sampling is used widely in marketing research and that "results of these studies are used by major American companies in making decisions of considerable consequence."102 Nonetheless, when respondents are not selected randomly from the relevant population, the expert should be prepared to justify the method used to select respondents.
From page 383...
... The difficulty is that nonresponse often is not random, so that, for example, persons who are single typically have three times the "not at home" rate in U.S. Census Bureau surveys as do family members.105 Efforts to increase response rates include making several attempts to contact potential respondents, sending advance letters,106 and providing financial or nonmonetary incentives for participating in the survey.107 The key to evaluating the effect of nonresponse in a survey is to determine as much as possible the extent to which nonrespondents differ from the respondents in the nature of the responses they would provide if they were present in the sample.
From page 384...
... 109 are desirable because they generally eliminate the need to address the issue of potential bias from nonresponse,110 such high response rates are increasingly difficult to achieve. Survey nonresponse rates have risen substantially in recent years, along with the costs of obtaining responses, and so the issue of nonresponse has attracted substantial attention from survey researchers.111 Researchers have developed a variety of approaches to adjust for nonresponse, including weighting obtained responses in proportion to known demographic characteristics of the target population, comparing the pattern of responses from early and late responders to mail surveys, or the pattern of responses from easy-to-reach and hard-to-reach responders in telephone surveys, and imputing estimated responses to nonrespondents based on known characteristics of those who have responded.
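One of the adjustments mentioned above, weighting obtained responses in proportion to known demographic characteristics, can be sketched in a few lines. This is a minimal illustration, not a procedure prescribed by the guide; the respondent records, age groups, and benchmark shares are invented.

```python
from collections import Counter

# Hypothetical respondents, each tagged with a demographic cell; the
# population shares would come from census or other benchmark data.
respondents = [
    {"age_group": "18-34", "answer": "yes"},
    {"age_group": "18-34", "answer": "no"},
    {"age_group": "35+",   "answer": "yes"},
    # ... more respondents in a real survey
]
population_share = {"18-34": 0.30, "35+": 0.70}  # assumed benchmarks

# Post-stratification weight: population share divided by sample share,
# so underrepresented cells count more and overrepresented cells less.
sample_counts = Counter(r["age_group"] for r in respondents)
n = len(respondents)
for r in respondents:
    sample_share = sample_counts[r["age_group"]] / n
    r["weight"] = population_share[r["age_group"]] / sample_share

# Weighted estimate of the proportion answering "yes".
yes = sum(r["weight"] for r in respondents if r["answer"] == "yes")
total = sum(r["weight"] for r in respondents)
print(round(yes / total, 3))
```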
From page 385...
... If it is impractical for a survey researcher to sample randomly from the entire target population, the researcher still can apply probability sampling to some aspects of respondent selection to reduce the likelihood of biased selection. For example, in many studies the target population consists of all consumers or purchasers of a product.
From page 386...
... In a carefully executed survey, each potential respondent is questioned or measured on the attributes that determine his or her eligibility to participate in the survey. Thus, the initial questions screen potential respondents to determine if they are members of the target population of the survey (e.g., Is she at least 14 years old?
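A screener of this kind reduces to a series of eligibility tests applied before the substantive questions begin. In the sketch below, the age test echoes the example above; the purchase test and the exclusion for marketing employment are invented for illustration and are not the guide's own example.

```python
# Sketch of a screener: a respondent enters the survey only if every
# eligibility test passes.
def is_eligible(r):
    return (
        r.get("age", 0) >= 14                       # age screen
        and r.get("bought_athletic_shoes", False)   # target-population screen
        and not r.get("works_in_marketing", False)  # common security screen
    )

pool = [
    {"age": 15, "bought_athletic_shoes": True, "works_in_marketing": False},
    {"age": 40, "bought_athletic_shoes": False, "works_in_marketing": False},
]
qualified = [r for r in pool if is_eligible(r)]  # only the first qualifies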
From page 387...
... Even questions that appear clear can convey unexpected meanings and ambiguities to potential respondents. For example, the question "What is the average number of days each week you have butter?
From page 388...
... 126. See Jon A. Krosnick & Stanley Presser, Questions and Questionnaire Design, in Handbook of Survey Research, supra note 1, at 294 ("No matter how closely a questionnaire follows recommendations based on best practices, it is likely to benefit from pretesting.
From page 389...
... . 132.  See infra Section VII.B for a discussion of obligations to disclose pilot work.
From page 390...
... . 134.  Howard Schuman & Stanley Presser, Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording and Context 113–46 (1981)
From page 391...
... Respondents are particularly likely to be attracted to a "don't know" option when the question is difficult to understand or the respondent is not strongly motivated to carefully report an opinion.141 One solution that some survey researchers use is to provide respondents with a general instruction not to guess at the beginning of an interview, rather than supplying a "don't know" or "no opinion" option as part of the options attached to each question.142 Another approach is to eliminate the "don't know" option and to add followup questions that measure the strength of the respondent's opinion.143
C. Did the Survey Use Open-Ended or Closed-Ended Questions?
From page 392...
... . 146. This question is based on one asked in American Home Products Corp.
From page 393...
... . 154.  See, e.g., American Home Prods.
From page 394...
... 156. Jon A. Krosnick, Survey Research, 50 Ann.
From page 395...
... The order in which questions are asked on a survey and the order in which response alternatives are provided in a closed-ended question can influence the answers.160 For example, although asking a general question before a more specific question on the same topic is unlikely to affect the response to the specific question, reversing the order of the questions may influence responses to the general question. As a rule, then, surveys are less likely to be subject to order effects if the questions move from the general (e.g., "What do you recall being discussed
158. Floyd J
From page 396...
... When respondents are shown response alternatives visually, as in mail surveys and other self-administered questionnaires or in face-to-face interviews when respondents are shown a card containing response alternatives, they are more likely to select the first choice offered (a primacy effect).162 In contrast, when response alternatives are presented orally, as in telephone surveys, respondents are more likely to choose the last choice offered (a recency effect)
From page 397...
... would you say uses these stripes on their package?"170 The court recognized that the high percentage of respondents selecting "Mennen" from an array of brand names may have represented "merely a playback of brand share";171 that is, respondents asked to give a brand name may guess the one that is most familiar, generally the brand with the largest market share.172 Some surveys attempt to reduce the impact of preexisting impressions on respondents' answers by instructing respondents to focus solely on the stimulus as a basis for their answers.
From page 398...
... It is possible to adjust many survey designs so that causal inferences about the effect of a trademark or an allegedly deceptive commercial become clear and unambiguous. By adding one or more appropriate control groups, the survey expert can test directly the influence of the stimulus.174 In the simplest version of such a survey experiment, respondents are assigned randomly to one of two conditions.175 For example, respondents assigned to the experimental condition view an allegedly deceptive commercial, and respondents assigned to the control condition either view a commercial that does not contain the allegedly deceptive material or do not view any commercial.176 Respondents in both the experimental and control groups answer the same set of questions about the allegedly deceptive message.
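The logic of the test/control comparison can be made concrete with a small calculation. Everything below, including the group sizes and response counts, is invented for illustration; the significance test is a standard two-proportion z-test, not a procedure the guide prescribes.

```python
from math import sqrt

# Invented test/control results: 200 respondents randomly assigned to
# view the challenged ad (test) and 200 to view a control ad.
n_test, n_control = 200, 200
test_yes, control_yes = 72, 30          # answered with the challenged claim

p_test = test_yes / n_test              # 0.36
p_control = control_yes / n_control     # 0.15

# The causally interpretable quantity is the difference: responses
# attributable to the ad itself rather than to guessing, preexisting
# beliefs, or the wording of the (identical) question.
net = p_test - p_control                # 0.21

# Two-proportion z-test for whether the difference exceeds what chance
# assignment alone would produce.
p_pool = (test_yes + control_yes) / (n_test + n_control)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_control))
z = net / se
print(f"net = {net:.2f}, z = {z:.2f}")  # z ~ 4.8, well beyond 1.96
```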
From page 399...
... In addition, if respondents who viewed the allegedly deceptive commercial respond differently than respondents who viewed the control commercial, the difference cannot be merely the result of a leading question, because both groups answered the same question. The ability to evaluate the effect of the wording of a particular question makes the control group design particularly useful in assessing responses to closed-ended questions,178 which may encourage guessing or particular responses.
From page 400...
... ; Consumer American Home Prods.
From page 401...
... See Joseph L. Gastwirth, Reference Guide on Survey Research, 36 Jurimetrics J
From page 402...
... Interviewer errors in following the skip patterns are therefore avoided, making CAI procedures particularly valuable when the survey involves complex branching and skip patterns.190 CAI procedures also can be used to control for order effects by having the program rotate the order in which the questions or choices are presented.191 Recent innovations in CAI procedures include audio computer-assisted self-interviewing (ACASI) in which the respondent listens to recorded questions over the telephone or reads questions from a computer screen while listening to recorded versions of them through headphones.
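A minimal sketch of how CAI software enforces a skip pattern and rotates response options appears below. The question wording, options, and branching rule are all invented for illustration.

```python
import random

# Invented two-question instrument with one skip rule. In CAI, the
# program, not the interviewer, controls branching and option order.
QUESTIONS = {
    "Q1": {"text": "Have you bought running shoes in the past year?",
           "options": ["yes", "no"],
           "skip_if": {"no": "END"}},   # ineligible respondents exit here
    "Q2": {"text": "Which of these brands do you recall seeing?",
           "options": ["Brand A", "Brand B", "Brand C"],
           "skip_if": {}},
}

def administer(ask):
    """ask(text, options) returns the respondent's answer."""
    answers = {}
    for qid in ("Q1", "Q2"):
        q = QUESTIONS[qid]
        # Rotate option order across respondents to counterbalance
        # primacy and recency effects.
        options = random.sample(q["options"], len(q["options"]))
        answers[qid] = ask(q["text"], options)
        # The program enforces the skip pattern, so interviewer errors
        # in following branching instructions cannot occur.
        if q["skip_if"].get(answers[qid]) == "END":
            break
    return answers

canned = iter(["yes", "Brand B"])       # canned answers for a dry run
print(administer(lambda text, options: next(canned)))
```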
From page 403...
... 2. Telephone interviews
Telephone surveys offer a comparatively fast and lower-cost alternative to in-person surveys and are particularly useful when the population is large and geographically dispersed.
From page 404...
... . 198.  Random-digit dialing provides coverage of households with both listed and unlisted telephone numbers by generating numbers at random from the sampling frame of all possible telephone numbers.
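The mechanics described in this footnote can be sketched in a few lines: numbers are generated at random within area code and prefix blocks, so unlisted numbers are reached as readily as listed ones. The blocks below use the fictional 555 prefix and are purely illustrative; real RDD designs draw from a frame of all assigned telephone exchanges.

```python
import random

# Fictional area code/prefix blocks standing in for a real frame.
blocks = [("312", "555"), ("847", "555")]

def rdd_number():
    area, prefix = random.choice(blocks)
    last_four = f"{random.randrange(10_000):04d}"  # uniform over 0000-9999
    return f"({area}) {prefix}-{last_four}"

sample = [rdd_number() for _ in range(5)]
print(sample)
```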
From page 405...
... 3. Mail questionnaires
In general, mail surveys tend to be substantially less costly than both in-person and telephone surveys.206 Response rates tend to be lower for self-administered mail surveys than for telephone or face-to-face surveys, but higher than for their Web-based equivalents.207 Procedures that raise response rates include multiple mailings, highly personalized communications, prepaid return envelopes, incentives or gratuities, assurances of confidentiality, first-class outgoing postage, and followup reminders.208
203. Additional disclosure and reporting features applicable to surveys in general are described in Section VII.B, infra.
From page 406...
... One advantage of computer-administered surveys over interviewer-administered
Gullickson, Response Rates in Survey Research: A Meta-Analysis of the Effects of Monetary Gratuities, 61 J. Experimental Educ.
From page 407...
... For example, if the target population consists of computer users, any bias from systematic underrepresentation is likely to be minimal. In contrast, if the target population consists of owners of television sets, a proportion of whom may not have Internet access, significant bias is more likely.
From page 408...
... , supra note 207, at 480–81 (a self-selected Web survey conducted by the National Geographic Society through its Web site attracted 50,000 responses; a comparison of the Canadian respondents with data from the Canadian General Social Survey telephone survey conducted using random-digit dialing showed marked differences on a variety of response measures)
From page 409...
... For example, a person without a landline may be reached by mail or e-mail. Similarly, response rates may be increased if members of the target population are more likely to respond to one mode of contact versus another.
From page 410...
... , interviewers must be trained to follow the pattern. Note, however, that in surveys conducted using CAPI or CATI procedures, the interviewer will be guided by the computer used to administer the questionnaire.
From page 411...
... . 224.  See, e.g., Stanley Presser et al., Survey Sponsorship, Response Rates, and Response Effects, 73 Soc.
From page 412...
... Thus, independent validation of a random sample of interviews by a third party rather than by the field service that conducted the interviews increases the trustworthiness of the survey results.227
VI. Data Entry and Grouping of Responses
A.
From page 413...
... The plaintiff in a trademark case229 submitted a set of proposed survey questions to the trial judge, who ruled that the survey results
228. See, e.g., Revlon Consumer Prods.
From page 414...
... More recently, the Seventh Circuit recommended filing a motion in limine, asking the district court to determine the admissibility of a survey based on an examination of the survey questions and the results of a preliminary survey before the party undertakes the expense of conducting the actual survey. Piper Aircraft Corp.
From page 415...
... A definition of the target population and a description of the sampling frame;
3. A description of the sample design, including the method of selecting respondents, the method of interview, the number of callbacks, respondent eligibility or screening criteria and method, and other pertinent information;
4.
From page 416...
... Copies of interviewer instructions, validation results, and code books.240 Additional information to include in the survey report may depend on the nature of the sampling design. For example, reported response rates along with the time each interview occurred may assist in evaluating the likelihood that nonresponse biased the results.
From page 417...
... Because failure to extend confidentiality may bias both the willingness of potential respondents to participate in a survey and their responses, the professional standards for survey researchers generally prohibit disclosure of respondents' identities. "The use of survey results in a legal proceeding does not relieve the Survey Research Organization of its ethical obligation to maintain in confidence all Respondent-identifiable information or lessen the importance of Respondent anonymity."245 Although no surveyor–respondent privilege currently is recognized, the need for surveys and the availability of other means to examine and ensure their trustworthiness argue for deference to legitimate claims for confidentiality in order to avoid seriously compromising the ability of surveys to produce accurate information.246
242. See Yvonne C
From page 418...
... 1982) (defendant denied access to personal identifying information about women involved in studies by the Centers for Disease Control based on Fed.
From page 419...
... coverage error. Any inconsistencies between the sampling frame and the target population.
From page 420...
... The omission of eligible population units from the sampling frame.
nonprobability sample.
From page 421...
... target population. See population.
From page 422...
... trade dress. A distinctive and nonfunctional design of a package or product protected under state unfair competition law and the federal Lanham Act § 43(a)
From page 423...
... Tanur, & Roger Tourangeau, Cognition and Survey Research (1999)

