
8

Potential Sources of Error: Nonresponse, Specification, and Measurement

This chapter continues the analysis in Chapter 7 of potential sources of error in the National Crime Victimization Survey (NCVS), covering nonresponse error, specification error, and measurement error.

NONRESPONSE ERROR

Nonresponse error in surveys arises from the inability to obtain a useful response to all survey items from the entire sample. A critical concern is when that nonresponse leads to biased estimates. Nonresponse bias is a product of the difference between respondents and nonrespondents on a particular measure and the size of the nonresponse population. A lower response rate increases the potential for greater nonresponse bias, but when the data are missing at random, a lower response rate will neither create nor increase nonresponse error.
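These two factors combine in the standard deterministic expression for nonresponse bias (a textbook identity, not a formula quoted from the NCVS documentation):

\[
\mathrm{bias}(\bar{y}_R) \;=\; \bar{y}_R - \bar{y} \;=\; \frac{n_N}{n}\left(\bar{y}_R - \bar{y}_N\right),
\]

where \(\bar{y}\) is the mean over the full sample, \(\bar{y}_R\) and \(\bar{y}_N\) are the means for respondents and nonrespondents, and \(n_N/n\) is the nonrespondent share of the sample. If \(\bar{y}_R = \bar{y}_N\), as under missingness at random, the bias is zero regardless of how low the response rate is.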

The NCVS, like most federal household surveys, is voluntary; participation is not required by law. The challenges facing today’s federal household surveys were recently summarized by the National Research Council (2013a, p. 68):

[They] include maintaining adequate response from increasingly busy and reluctant respondents. More and more households are non-English speaking, and a growing number of higher income households have controlled-access residences….Today’s household surveys face confidentiality and privacy concerns, a public growing more suspicious of its government, and competition from an increasing number of private as well as government surveys vying for the public’s attention.


These challenges mean that maintaining a high level of response on a large voluntary national survey is difficult. This section examines the nonresponse profile of the NCVS, looking at both the level of nonresponse and its potential effect on the measured rate of sexual victimization.

Nonresponse can arise at several points in the process of sample recruitment. In the NCVS, the household address is selected, and then each household member (12 years of age and older) is asked to complete the survey. Nonresponse on a questionnaire (unit nonresponse) can occur at two stages. Household nonresponse occurs when no one living at the selected housing unit responds in the data collection wave. Person-level nonresponse occurs when some eligible persons in the household respond and some do not. In addition, a household or person may respond on some waves but not on all waves. In the NCVS, a household responding to at least one wave of the NCVS is counted as a household respondent for the survey. Likewise, a person who is interviewed in one or more waves is called a person respondent. Finally, item nonresponse (as opposed to unit nonresponse) can also occur for person respondents when some questions that should have been completed were not.

This section looks in more depth at the person-level nonresponse at both the unit and item level.

Unit-Level Nonresponse

The NCVS has maintained a moderately high level of survey (unit) response at both the household level and the person level (see Table 8-1). In 2011, 79,800 households participated in the NCVS, representing a 90 percent household response rate for the year.1 The person-level response rate (most important for victimization rates) was 88 percent in 2011. Response rates have decreased several percentage points over the decade, but not substantially (see Table 4-2 in Chapter 4). These response rates are consistent with several other important federal household surveys in 2011.2

____________

1It appears that the Census Bureau is defining the housing unit response rate as the number of housing units that participated at one or more waves during the year divided by the number that should have participated during the year. This is an inflated number because if a housing unit participated in January but not in July (or vice versa), then it is still counted as a respondent for the year. A better measure of the response rate is the number of times housing units participated divided by the number of times housing units were eligible to participate. We believe this response rate calculation is a better indicator of the potential for (or risk of) nonresponse bias than the current way the response rate is calculated.

2For comparison, in 2010 the Current Population Survey had monthly household response rates of 91-93 percent; the American Community Survey had a household response rate of 98 percent; the National Health Interview Survey had a household response rate of 82 percent; and the Consumer Expenditure Survey had a household response rate of 73 percent.
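To make the distinction in footnote 1 concrete, here is a minimal sketch in Python with invented wave-level participation records (nothing here is drawn from Census Bureau files):

    # One list of wave outcomes per housing unit for the year:
    # True = participated in that wave, False = eligible but did not.
    units = [
        [True, False],   # participated in January but not in July
        [True, True],    # participated in both waves
        [False, False],  # never participated
    ]

    # Census-style rate (footnote 1): a unit counts as responding if it
    # participated in at least one wave during the year.
    census_rate = sum(any(waves) for waves in units) / len(units)

    # Rate preferred by the panel: participations divided by the number of
    # eligible wave slots.
    wave_rate = (sum(sum(waves) for waves in units)
                 / sum(len(waves) for waves in units))

    print(census_rate)  # 2/3, about 0.67
    print(wave_rate)    # 3/6 = 0.50

The first definition masks within-year nonresponse; the second counts every missed wave, which is why the panel views it as a better indicator of nonresponse risk.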


TABLE 8-1 National Crime Victimization Survey Response Rates for Households and Individuals

Year    Households Responding    Household Response Rate (%)    Total Persons    Persons Responding    Person Response Rate (%)
1996 45,000 93 NA 85,330 91
1997 42,910 95 NA 79,470 90
1998 43,000 94 NA 78,900 89
1999 43,000 93 204,915 77,750 89
2000 43,000 93 207,800 79,710 90
2001 44,000 93 208,598 79,950 89
2002 42,000 92 203,061 76,050 87
2003 42,000 92 201,388 74,520 86
2004 42,000 91 202,771 74,500 86
2005 38,600 91 181,009 67,000 84
2006 38,000 91 179,717 67,650 86
2007 41,000 90 170,869 73,650 86
2008 42,093 90 155,704 77,852 86
2009 38,728 92 157,796 68,665 87
2010 40,974 92 NA 73,283 88
2011 79,800 90 NA 143,120 88

SOURCES: Data from Bureau of Justice Statistics (1997, 1998, 1999, 2000, 2001, 2002a, 2003, 2004, 2005, 2006, 2007, 2008a, 2009, 2010, 2011, 2012).

Nonresponse in a survey may be “missing at random” (MAR), meaning that the decision not to respond on the survey is unrelated to key study outcome measures, such as crime victimization, and that reweighting of the responding units may suffice to adjust for the missing data. The presence of this type of nonresponse, when appropriately reweighted, does not cause a bias, but it does reduce sample size and increase sampling error. Other unit nonresponse is judged to be “not missing at random” (NMAR) and thus is more of a problem because it can produce bias in the estimates as well as increase the sampling error. If the nonresponse varies with key outcome measures and their covariates (such as race, income, or geographic area), then the nonresponse may be MAR within groups formed based on these covariates. In this case, reweighting might be done within the groups, thus reducing potential nonresponse bias.
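A minimal sketch of this weighting-class style of adjustment, with invented weights and class labels (it illustrates the generic method, not the NCVS production weighting system):

    # Each sampled person has a base weight, an adjustment class formed from
    # covariates (e.g., race or geography), and a response indicator.
    sample = [
        {"w": 100.0, "cls": "A", "resp": True},
        {"w": 100.0, "cls": "A", "resp": False},
        {"w": 100.0, "cls": "B", "resp": True},
        {"w": 100.0, "cls": "B", "resp": True},
        {"w": 100.0, "cls": "B", "resp": False},
    ]

    # Within each class, inflate respondent weights by the inverse of the
    # weighted response rate, so respondents stand in for nonrespondents
    # assumed to be MAR within that class.
    factor = {}
    for c in {rec["cls"] for rec in sample}:
        total = sum(r["w"] for r in sample if r["cls"] == c)
        resp = sum(r["w"] for r in sample if r["cls"] == c and r["resp"])
        factor[c] = total / resp

    for r in sample:
        r["w_adj"] = r["w"] * factor[r["cls"]] if r["resp"] else 0.0

    # Class A respondents now carry weight 200; class B respondents, 150.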

Because of the panel nature of the NCVS, considerable information is known about the demographics of selected households and individual household members if they respond at least once over the 3-year life of the panel. The Bureau of Justice Statistics (BJS) uses this information for much more than just adjusting for nonresponse.


TABLE 8-2 Survey-Level Nonresponse on the National Crime Victimization Survey Judged to Be Missing at Random (MAR), by Subgroups

Subgroup                       Percentage of Nonresponses Judged MAR    Total Count of Nonrespondents Judged MAR
All                            81.10                                    2,762
Male                           84.04                                    1,327
Female                         83.43                                    1,435
Black                          84.81                                      469
Other                          80.43                                    2,294
25 years of age and younger    84.11                                      323
25 years of age and older      83.74                                    2,441

SOURCE: NORC at the University of Chicago (2009, p. 19, Table 2.5).

For example, one adjustment is to “inflate sample point estimates to known population totals to compensate for survey nonresponse and other aspects of the sample design” (Bureau of Justice Statistics, 2008b, p. 12). (See Chapter 4 for more detail on this and other adjustments.)
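In generic terms, such an adjustment is a ratio inflation of weights toward an external control total. A minimal sketch with invented numbers (the cell, totals, and factor are hypothetical):

    # Weighted sample count in one adjustment cell (e.g., an age-by-sex group)
    # versus an independent population control for the same cell.
    weighted_sample_total = 9_200_000.0    # sum of current weights in the cell
    population_control = 10_000_000.0      # known population total for the cell

    inflation = population_control / weighted_sample_total  # about 1.087
    # Every weight in the cell is multiplied by this factor, so weighted
    # estimates reproduce the known population total.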

The success of the BJS adjustment processes in addressing potential unit-level nonresponse bias in the NCVS was examined by NORC at the University of Chicago (2009) in an extensive study with several parts. In one part, NORC conducted a capture-recapture analysis across panel waves to obtain relative counts of different categories of nonrespondents. This technique separates the chronic nonresponders across the 3 years from the occasional and frequent responders, hypothesizing that the chronic nonresponders were potentially NMAR. Based on this assumption as to which respondents were NMAR, the NORC report estimates that 81 percent of the nonrespondents are not chronic nonresponders and may be assumed to be MAR (see Table 8-2). Using the terms “ignorable” for MAR and “nonignorable” for NMAR nonresponse, the report (NORC at the University of Chicago, 2009, p. 16) concludes:

Overall, more that 80 percent of the nonresponses in NCVS can be regarded as “ignorable.” Proportionately, more nonresponses by male, black, and young (age 25 or less) eligible interviewees are ignorable. The largest of variation occur for the race/ethnicity, with eligible black interviewees having proportionately more ignorable nonresponses (84.81% vs. 80.43%).

NORC points out that its techniques did not allow analysis of nonresponse in the first round of the panel.
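For intuition, here is a two-occasion capture-recapture (Lincoln-Petersen) calculation of the kind that underlies such analyses; the counts are invented, and the arithmetic is far simpler than the models NORC actually fit:

    # Treat "responding in a wave" as a capture occasion. People who never
    # respond (chronic nonresponders) are invisible in any single wave, but
    # their number can be estimated from the overlap between two waves.
    n1 = 800   # respondents in wave 1
    n2 = 750   # respondents in wave 2
    m = 700    # respondents in both waves

    n_total_est = n1 * n2 / m               # about 857 people in total
    ever_seen = n1 + n2 - m                 # 850 seen at least once
    chronic_est = n_total_est - ever_seen   # about 7 never-responders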

In a subsequent part of the report, NORC developed log linear models to predict response disposition for key subgroups. The models examined “easy versus hard” responder characteristics. Finally, NORC made county-level comparisons between statistics from the Uniform Crime Reports and the NCVS pooled across years. The report’s conclusion (NORC at the University of Chicago, 2009, p. 47) is that there is “little evidence for nonresponse bias after the first round of the survey…. The within unit nonresponse is weight adjusted to age and race controls in the NCVS and these seem to be the categories that are the main drivers in any potential nonresponse bias.”

The panel has important reservations about some of the NORC analysis and conclusions. The capture-recapture analysis is based on the assumption that individuals who respond at least once but not routinely on the NCVS are MAR. This assumption appears to go untested and yet underpins NORC’s overall analysis. Another limitation is that the logistic modeling techniques used in the study only looked at a few standard demographic characteristics. Finally, it is unclear whether this broad look at nonresponse on the NCVS paints the same picture as would an analysis of the subpopulations that are at greater risk for sexual violence.

CONCLUSION 8-1 The overall unit response rates, as calculated, on the National Crime Victimization Survey are moderately high and have been reasonably stable over the past 10 years. Although an independent analysis concluded that the methods that the Bureau of Justice Statistics uses to adjust for nonresponse appear to provide a satisfactory correction for nonresponse bias at the unit level, our panel has reservations about that analysis and remains concerned that there may be a nonresponse bias related to sexual victimization.

Panel Attrition

Panel attrition is a response pattern in surveys with multiple waves of data collection in which a respondent’s propensity to respond decreases over these waves. Because the NCVS is a panel survey with seven waves of data collection over 3 years, it is important to examine the nonresponse pattern across waves. There are many reasons that an individual may attrite, including deciding to quit reporting, not being available during the data collection period, or moving to a different address.

BJS does not provide NCVS response rates by wave. To get some sense of attrition rates, the panel calculated unweighted response rates (at the person level)3 using data for 2007-2008 by time in sample (see Figure 8-1).

____________

3The first wave person-level response rate is the proportion of persons participating at first wave among sampled, eligible persons at first wave. The person-level attrition rate at wave t >1 is the proportion of persons who participated at first wave who also participated at wave t.
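A minimal sketch of these definitions, using an invented person-by-wave response matrix:

    # Rows are persons, columns are waves 1..7; True = responded in that wave.
    waves = [
        [True, True, True, False, True, True, True],
        [True, True, False, False, False, False, False],
        [False, False, False, False, False, False, False],
        [True, True, True, True, True, True, True],
    ]

    # First-wave response rate: wave 1 responders over all eligible persons.
    tis1 = sum(p[0] for p in waves) / len(waves)                   # 3/4

    # Rate at wave t > 1: among wave 1 responders, the share who also
    # responded at wave t.
    wave1_resp = [p for p in waves if p[0]]
    tis = [sum(p[t] for p in wave1_resp) / len(wave1_resp)
           for t in range(1, 7)]

    # "ALL": share of eligible persons responding in every one of the 7 waves.
    all_waves = sum(all(p) for p in waves) / len(waves)            # 1/4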



FIGURE 8-1 National Crime Victimization Survey person-level attrition rates (unweighted) for the period 2007-2008.
NOTES: Time in sample (TIS) 1 is the response rate at the initial wave, TIS 2-7 is the response rate given response at TIS 1, and ALL is the proportion of eligible persons responding to all seven waves.
SOURCE: Data from National Crime Victimization Survey, 2007-2008.

These attrition rates were calculated at the person level using linked longitudinal files. One can see substantial attrition in response rates over time, with less than half the sample responding in all waves.

The NORC at the University of Chicago (2009) report provides insight into this panel attrition by subgroups. The report’s analysis is based on the total number of waves in which a respondent participated, without an ordering of those waves over time. Looking at the age of the respondents, the analysis found that younger respondents participated in fewer waves than did older respondents (see Figure 8-2).4

____________

4Data included only individuals who had participated in the first wave.



FIGURE 8-2 Participation in National Crime Victimization Survey waves by respondents’ ages, 2005-2006.
SOURCE: NORC at the University of Chicago (2009, p. 29, Chart 3.3).

Approximately 15 percent of respondents 25 years of age and younger participated in all seven waves; in contrast, approximately 45 percent of respondents 55 years of age and older did so. And as can be seen in the figure, nearly 30 percent of respondents 25 years of age and younger did not participate after the first wave.

The NORC at the University of Chicago (2009) report also looks at response by household structure (see Figure 8-3). Individuals living as couples (couple only, couple with kids and family, couple with others) responded in more waves than did individuals who were not identified as being part of a couple (male with relatives, male with others, female with relatives, female with others).

The results shown in both of these figures provide particular concern for the estimation of rape and sexual assault because the low responders—particularly young people and females who are not part of a couple—appear to be more at risk for being victims of those crimes. In a multivariate analysis of subgroup risk among females for rape and sexual assault (Lauritsen, 2012), younger people (in the age groups 12 to 17, 18 to 34, and 35 to 49) have a higher odds ratio than do older (50+) individuals (see Table 8-3).



FIGURE 8-3 Participation in National Crime Victimization Survey waves by family structure, 2002-2006.
SOURCE: NORC at the University of Chicago (2009, p. 28, Chart 3.2).

Females who are not part of a couple (widowed, divorced, separated, and never married) have a higher odds ratio than do married women. Planty et al. (2013) provide similar results.

Thus, attrition rates are higher in several subgroups that appear to be at higher risk for sexual violence. It is unclear whether this is a related effect.

TABLE 8-3 Risk for Rape and Sexual Assault for Females, by Age and Marital Status, National Crime Victimization Survey, 1994-2009

Subgroup   Odds Ratio   95% Confidence Interval   Significance
Age (in comparison with 50+)
   35 to 49 4.6 [3.31, 6.38] *
   18 to 34 8.7 [6.21, 12.22] *
   12 to 17 9.23 [6.31, 13.40] *
Marital status (in comparison with married)
   Widowed 2.48 [1.43, 4.29]
   Divorced 5.56 [4.44, 6.96] *
   Separated 10.51 [7.89, 14.00] *
   Never married 3.90 [3.12, 4.87] *

*The odds ratios for rape and sexual assault are significantly greater than the odds ratios for other forms of serious violence.
SOURCE: Lauritsen (2012, Table 6).
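For readers unfamiliar with the odds-ratio scale of Table 8-3, the standard definition (a textbook formula, not specific to Lauritsen's model) is

\[
\mathrm{OR} \;=\; \frac{p_1/(1-p_1)}{p_0/(1-p_0)},
\]

where \(p_1\) is the victimization probability in the subgroup and \(p_0\) is the probability in the reference group (women 50 and older, or married women). The entry of 10.51 for separated women, for example, means the model puts their odds of rape or sexual assault at roughly ten times the odds for married women, with the other covariates in the model held fixed.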


One could argue that someone who has been sexually victimized may be less willing to respond on the next NCVS, knowing that questions regarding victimization will be asked. Similarly, one could argue that someone who has been sexually victimized may be more likely to move to a safer neighborhood, and thus no longer be an eligible respondent. The panel did not find data that could answer this question definitively, but there appears to be potential for a nonresponse bias that could contribute to underreporting of these victimizations.

CONCLUSION 8-2 There appears to be notable panel attrition over the 3 years in the National Crime Victimization Survey (NCVS). This attrition is particularly problematic for estimating rape and sexual assault because some people at greater risk for being victimized by these crimes—young people and females not living as part of a couple—are also some of those most likely to drop out before the seven waves of the NCVS have been completed.

CONCLUSION 8-3 Although the Bureau of Justice Statistics publishes annual response rates for the National Crime Victimization Survey (NCVS), the published data do not include important details of response, such as mode of data collection and attrition rate. Such details are needed by data users for a thorough assessment of the quality of NCVS estimates.

Item Nonresponse

Item nonresponse occurs when a respondent completes a substantial portion of a questionnaire (enough to count the interview as “complete”) but does not provide answers to certain key items. The panel could not find an analysis of item nonresponse on the NCVS in general, nor one specifically for the questionnaire items regarding rape and sexual assault. Without such analysis, the panel relied on its collective experience and judgment about item response for key questions regarding sexual victimization.

There is considerable evidence in survey research that respondents are reluctant to answer socially undesirable questions (Bradburn, 1983; Schaeffer, 2000; Tourangeau and Smith, 1996; Tourangeau and Yan, 2007). (See also the section on “Questionnaire” in this chapter.) The panel thinks that item “refusals” on these particular socially undesirable questions would be difficult to identify. If a respondent does not want to report a rape or sexual assault, or to talk about such an assault, then he or she is more likely to answer NO to the appropriate screening questions (he or she was not victimized) rather than more directly refusing to answer the question. In fact, the screening questionnaire (Bureau of Justice Statistics, n.d.-d) has only check boxes for YES or NO for these questions, and no response box for “refused” or “don’t know.” Thus, these item refusals are most likely disguised as legitimate zeros (there was no victimization).

Panel surveys may create an additional nuance regarding item nonresponse. After going through one or more waves of the survey, a respondent learns that answering YES to a screening question will lead to a range of additional questions regarding the specific incident. Surveys with this repeated pattern, and especially those with the pattern repeated across multiple waves, are subject to “satisficing”: a respondent provides an answer (perhaps NO to a screening question) that moves the interviewer on to the next question, without necessarily being an accurate or complete response. This respondent conduct is hard to detect and measure, but the panel thinks it is likely that satisficing is occurring on the NCVS.

CONCLUSION 8-4 The panel believes it is likely that item refusals on questions about sexual victimization on the National Crime Victimization Survey may be recorded as “no” responses rather than as item nonresponse when a respondent does not want to report a victimization. Another possibility is that a respondent sometimes answers “no” on screening questions simply to avoid additional questions in the survey.

SPECIFICATION ERROR

For any survey, its intended purpose and concepts must be clearly defined in order for survey instruments and procedures to accurately translate those concepts into the collection of data. In surveys, specification error may occur when there is a mismatch between what the survey is measuring and what it is intended to measure.5 As defined by Biemer (2010, p. 31): “specification error pertains specifically to the problem of measuring the wrong concept in a survey, rather than measuring the right concept poorly.” This section examines a key concept associated with the NCVS to see if it is clearly defined and consistent between the survey’s purposes and processes.

This key concept is to identify if and when a respondent has been the victim of a rape or sexual assault. BJS has developed a clear definition of what the survey is intended to measure (see Box 8-1). In the omnibus screener that is currently used in the NCVS, the deliberate approach is to soften the link between the screening cues and any particular type of criminal victimization.

____________

5This definition is different from that used by economists and other mathematical modelers, for whom “specification error” refers to an incorrect statement of an empirical model. We use the term differently in the report.


BOX 8-1
Definitions of Rape and Sexual Assault Used on
the National Crime Victimization Survey

Rape—Forced sexual intercourse including both psychological coercion as well as physical force. Forced sexual intercourse means vaginal, anal, or oral penetration by the offender(s). This category also includes incidents where the penetration is from a foreign object such as a bottle. Includes attempted rapes, male as well as female victims, and both heterosexual and homosexual rape. Attempted rape includes verbal threats of rape.

Sexual Assault—A wide range of victimizations, separate from rape or attempted rape. These crimes include attacks or attempted attacks generally involving unwanted sexual contact between victim and offender. Sexual assaults may or may not involve force and include such things as grabbing or fondling. Sexual assault also includes verbal threats.

SOURCE: Bureau of Justice Statistics (n.d.-b).

In particular, for rape and sexual assault, as BJS translates these specific concepts into data collection, the respondent is asked the following question (Bureau of Justice Statistics, n.d.-d):

Has anyone attacked or threatened you in any of these ways:

•   With any weapon, for instance, a gun or knife,

•   With anything like a baseball bat, frying pan, scissors, or stick,

•   By something thrown, such as a rock or bottle,

•   Include any grabbing, punching or choking,

•   Any rape, attempted rape or other type of sexual attack, [emphasis added]

•   Any face to face threats,

Or

•   Any attack or threat or use of force by anyone at all. Please mention it even if you are not certain it was a crime?

The respondent also is asked a special follow-up question that focuses on how well the respondent knew the offender:

Incidents involving forced or unwanted sexual acts are often difficult to talk about. (Other than any incidents already mentioned,) have you been forced or coerced to engage in unwanted sexual activity by:

   (a)  Someone you didn’t know before

   (b)  A casual acquaintance

   (c)  Someone you know well?

The concept that BJS is trying to measure is clearly specified by the definitions it publishes. Unfortunately, these complex, multifaceted definitions are translated into a few simple words in the questionnaire: rape, attempted rape, other type of sexual attack, unwanted sexual activity. The cue screening approach may be effective for many common victimizations on the omnibus NCVS, but these words do not convey the complexity of the intended concepts nor capture the components of the BJS definitions of rape and sexual assault. (The section below on measurement error further discusses how respondents may misinterpret the words in these questions.)

CONCLUSION 8-5 There is serious specification error in the National Crime Victimization Survey measurement of rape and sexual assault. Although the Bureau of Justice Statistics has developed clear definitions of the concepts, they are replaced in the omnibus screener by ambiguous wording that does not convey the multifaceted concepts to respondents.

MEASUREMENT ERROR

Measurement error includes a large family of errors that may occur when survey responses yield inaccurate or incomplete information. In this section, the report discusses potential measurement errors on the NCVS associated with the respondent, the questionnaire, the mode of collection, and the interviewer-respondent interaction. These issues are interrelated, and each can contribute to measurement error on the NCVS.

Respondents

Survey research has mapped a respondent’s cognitive process in answering survey questions (Schwarz, 1996; Strack and Martin, 1987; Tourangeau, 1984; Tourangeau, Rips, and Rasinski, 2000). In particular, Tourangeau describes four steps a respondent goes through in responding to a survey question:


1.   comprehending the question and instructions;

2.   retrieving specific memories or information;

3.   making judgments, that is, assessing the match of the information to the question and the completeness of that information; and

4.   formulating a response.

In the final step of formulating a response, Cannell, Miller, and Oksenberg (1981) point out that the respondent evaluates potential responses based not only on whether he or she judges a potential response to be accurate, but also on other factors he or she views as important. This observation is important in assessing response to a sensitive question.

On the NCVS, a respondent may not comprehend a critical question in the same way that BJS intends.6 For example, on the screening questionnaire (Bureau of Justice Statistics, n.d.-d), a respondent is asked:

Has anyone attacked or threatened you in any of these ways:

•   With any weapon, for instance, a gun or knife,

•   With anything like a baseball bat, frying pan, scissors, or stick,

•   By something thrown, such as a rock or bottle,

•   Include any grabbing, punching or choking,

•   Any rape, attempted rape or other type of sexual attack,

•   Any face to face threats,

    Or

•   Any attack or threat or use of force by anyone at all. Please mention it even if you are not certain it was a crime?

In the “comprehension” process (see above), the respondent may focus on the listing of various weapons used to threaten and not understand that BJS would allow a yes response even if none of these types of threats were made. An example would be an incident of “date rape” involving alcohol use and being held down, but in which no weapon was used. Thus, the respondent might comprehend the question incorrectly, recall an incident from memory, make a judgment that the incident does not fit the criteria in the survey question, and respond no.

This screening question asks specifically about rape and attempted rape.

____________

6A similar example is discussed earlier in this chapter in the section on specification error: the panel has concluded that the concept of rape and sexual assault is misspecified in the data collection instruments of the NCVS. This section provides more discussion as to why a respondent may not understand these items in the way the BJS intends.


The panel believes these terms are ambiguous for many respondents and many situations. This issue is discussed earlier in the chapter under “Specification Error” and is further discussed below in the section “Questionnaire.”

Alternatively, the respondent may clearly understand the question, recall the memory of being raped at knifepoint, make a judgment that the incident is relevant to the question being asked, and yet decide not to disclose the incident. She or he formulates the no answer to the screening question based on the “other factors” (see Cannell, Miller, and Oksenberg, 1981; also see the section on item nonresponse earlier in this chapter). As Rasinski (2012, p. 3) notes:

[W]hen the events are “sensitive” additional considerations for protecting the respondent’s privacy, preserving the respondent’s self-image and assuring the respondent that they will not suffer physical or psychological harm because of their disclosure must also be put into place.

See also Schaeffer (2000); Sudman and Bradburn (1982); and Tourangeau, Rips, and Rasinski (2000).

Questionnaire

The wording of questions is critically important to assist a respondent in comprehending the survey designer’s intended meaning. Rasinski (2012) points out that in developing effective questions to solicit information about sexual victimizations, one must consider both the methodological aspects of designing sensitive questions and the specific nuances of talking about rape and sexual assault. There are several different aspects of a question that could make it sensitive—social undesirability, invasion of privacy, and risk of disclosure (to third parties) (Tourangeau, Rips, and Rasinski, 2000). A question that asks a respondent about experiencing sexual violence incorporates all three aspects of sensitivity.

Previous parts of this report discuss the issues of miscommunication when using such words as rape, force, and consent. Tracy et al. (2012, p. 3) explain this issue in a broad context:

This historical context influences the way sex crime laws are written by lawmakers and enforced by law enforcement, and, in cases arising under those laws, how police decide whether to arrest, how prosecutors decide whether to take the cases to court, and how judges and juries make ultimate decisions as to whether to convict. The system’s response in turn impacts whether victims perceive themselves as crime victims and whether they view the criminal justice system as one that recognizes them as crime victims. One consequence of the system’s negative impact on victims is that it reduces victim reporting to and cooperation with police. Understanding this background will help in developing both survey instructions and questions that are more effective at capturing the prevalence and incidence of rape and sexual assault. It will also assist in understanding that the data collected may be limited to the extent that victim reporting—even to surveys—may be impeded by inaccurate societal notions about rape and sexual assault.

The NCVS is an omnibus crime survey, and the current screener uses cues that deliberately soften the link between the questions and any specific type of criminal victimization, focusing instead on such things as weapons and location. The panel has two major concerns about how the NCVS currently asks questions about rape and sexual assault. First, the questionnaire uses terms, such as rape, that do not always have consistent meaning and that do not clearly articulate the scope of actions included in the definition of rape. Second, the questions are embedded in a criminal context that may impede accurate reporting.

The NCVS uses the word rape, as in

Has anyone attacked or threatened you in any of these ways:
(e) Any rape, attempted rape or other type of sexual attack

The NCVS screener follows this with two other general cues regarding “incidents committed by someone you know” and “incidents involving forced or unwanted sexual acts.” (See Chapter 4 for more details.) Taken together, these cues are meant to assist a respondent in recalling a rape or sexual assault.

As described in detail in Chapters 2, 3, and 4, legal statutes throughout the United States, the Federal Bureau of Investigation, and BJS each use a different definition of the word “rape.” It is not reasonable to assume that individual respondents will all interpret this word (or the term “sexual acts”) identically. Other important surveys (described in Chapter 5) have taken a different approach. Although their questionnaires are not identical, they have used questions that more clearly describe specific “behaviors” that an offender may have exhibited. When a respondent is asked whether someone engaged in a very specific action in the incident, there is considerably less chance for miscommunication. These types of questions are referred to as “behaviorally specific” because they explicitly describe a set of behaviors. For example, on the National Violence Against Women Study, respondents were asked:

Has a man or boy ever made you have sex by using force or threatening to harm you or someone close to you? Just so there is no mistake by sex, we mean putting a penis in your vagina.


This question describes a specific action (“putting a penis in your vagina”), which is more likely to be clearly understood than asking a respondent if he or she has been raped. This approach was reinforced in a recent discussion of research methods for measuring rape and sexual assault (Jaquier, Johnson, and Fisher, 2011, p. 27):

The usefulness of behaviorally specific questions cannot be overemphasized, not necessarily because they produce larger estimates of rape, but because they use words and phrases that describe to the respondent exactly what behavior is being measured. Using behaviorally specific screen questions appears to cue more women to recall their experiences.

Most of the studies that use behaviorally specific questions have measured a higher rate of incidence of sexual violence (Fisher, 2009), and it is the panel’s judgment that the use of behaviorally specific questions improves communication with the respondent and facilitates more consistent responses.

CONCLUSION 8-6 Words such as “rape” and “sexual assault” on the National Crime Victimization Survey may not be consistently understood by survey respondents. Other surveys have used more behaviorally specific words to describe a specific set of actions. More specific wording of questions would be understood more consistently by respondents and thus lead to more complete and accurate answers.

The NCVS is a criminal victimization survey. It is introduced that way to household members. Once an interview begins, the questionnaire goes through a listing of crimes, asking each respondent if he or she has been the victim of any of them. When asked questions about rape and sexual assault, it is clear that the interviewer is asking about a crime. In fact, the questions about rape and sexual assault are embedded among questions that are dominated by other crimes. For example, as noted above, the following question is dominated by the descriptions of weapons and assaults.7 Rape and sexual assault, particularly when no weapon is involved, may appear to be less central to the line of inquiry than other forms of assault in this list (Bureau of Justice Statistics, n.d.-d).

____________

7The context and surrounding questions in a questionnaire may greatly affect responses on a survey. This was illustrated by Gibson et al. (1978, p. 251) in an experiment that added a series of attitude questions about crime to the National Crime Survey (NCS). They found that inclusion of the attitude supplement to the NCS had “a statistically significant and substantial impact on the victimization rates obtained.”


Has anyone attacked or threatened you in any of these ways:

•   With any weapon, for instance, a gun or knife,

•   With anything like a baseball bat, frying pan, scissors, or stick,

•   By something thrown, such as a rock or bottle,

•   Include any grabbing, punching or choking,

•   Any rape, attempted rape or other type of sexual attack,

•   Any face to face threats,

    OR

•   Any attack or threat or use of force by anyone at all. Please mention it even if you are not certain it was a crime?

Most sexual violence is committed by someone known to the victim. The victim may not have contacted the police (it is estimated that between 65 and 80 percent of such violent incidents are not reported to police) and may not think of the incident as a crime. The respondent may also think that because she or he did not contact the police about the incident, it should not be reported on a government crime inquiry. A respondent may fail to respond for these reasons even though the current NCVS screener has a cue reminding respondents that “people often do not think of incidents committed by someone they know.” Alternatively, the respondent may understand that the sexual victimization was criminal but may fear reprisal or may not want to get the other person “in trouble.” Thus, the respondent may have reservations about answering questions about criminal incidents and the risk of disclosure to police.

CONCLUSION 8-7 Questions about incidents of rape and sexual assault in the National Crime Victimization Survey are asked in the context of a criminal victimization survey and embedded within individual questions that describe other types of crimes. This context may inhibit reporting of incidents that the respondent does not think of as criminal, did not report to the police, or does not want to report to police.

Data Collection Modes and Methods

Data collection mode can have important consequences for total survey quality. The mode shapes the context of a survey: it affects questionnaire construction, the amount and type of communication with respondents, and the completion rate, among other things. Considerable survey research on mode effects has been conducted. One of the most relevant studies, Tourangeau and Smith (1996), compared three methods of collecting survey data about sexual behaviors and other sensitive topics: computer-assisted personal interviewing (CAPI), computer-assisted self-administered interviewing (CASI), and audio computer-assisted self-administered interviewing (ACASI). Tourangeau and Smith (1996, p. 275) conclude:

The three mode groups did not differ in response rates, but the mode of data collection did affect the level of reporting of sensitive behaviors: both forms of self-administration tended to reduce the disparity between men and women in the number of sex partners reported. Self-administration, especially via ACASI, also increased the proportion of respondents admitting that they had used illicit drugs.

Thus, the choice of data collection mode is very important when dealing with sensitive questions. A question may involve a potentially “socially undesirable” response. If an interviewer is asking the question, hearing the answer, and perhaps probing for more information, then the respondent may be concerned about the interviewer’s approval or disapproval. Thus, a self-administered mode of collection generally gives respondents less motivation to misreport on sensitive questions. In a review of reporting errors in surveys, Tourangeau and Yan (2007, p. 867) conclude:

[F]indings on mode difference in reporting of sensitive information clearly point a finger at the interviewer as a contributor to misreporting. It is not that the interviewer does anything wrong. What seems to make a difference is whether the respondent has to report his or her answers to another person.

The NCVS is interviewer administered. When the NCVS began, it relied more heavily than it does now on in-person interviews with household members, and this is still the method used for first-wave interviews. Beginning in 1980, cost considerations led BJS to use telephone interviewing (by the field representative) in subsequent waves, and telephone interviewing is now encouraged in all waves but the first. Approximately 57 percent of all within-unit interviews are conducted over the telephone. Because this percentage includes wave 1 interviews (which are primarily conducted in person), the percentage of telephone interviews for subsequent waves is higher.

Yu, Stasny, and Li (2008) reported a mode effect in the NCVS, with rape reported at a rate 1.45 times higher in personal interviews than in telephone interviews. Using Bayesian methods, the authors estimated the probabilities that a personal crime that had occurred was not reported in the interview: “Thus for interviews conducted over the telephone with women who are victims of any type of personal crime (except for personal larceny), we estimate that approximately 37% of the women did not report the victimization” (Yu, Stasny, and Li, 2008, p. 681). This analysis used unweighted data from the 1998 to 2004 NCVS for women respondents 16 years of age and older. (They also used 1993 to 1997 data as prior information in their Bayesian models.)
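To see how a mode effect of this size relates to an underreporting probability, suppose the observed rate in each mode is the true rate times the probability that a victim reports in that mode. The reporting probabilities below are invented values chosen only to be consistent with the 1.45 ratio; they are not estimates from Yu, Stasny, and Li:

    # Observed rate = true rate x probability that a victim reports.
    true_rate = 2.0       # hypothetical victimizations per 1,000 persons
    p_personal = 0.90     # hypothetical reporting probability, in person
    p_phone = 0.62        # hypothetical reporting probability, by phone

    obs_personal = true_rate * p_personal   # 1.80 per 1,000
    obs_phone = true_rate * p_phone         # 1.24 per 1,000

    print(obs_personal / obs_phone)  # about 1.45, the observed mode ratio
    print(1 - p_phone)               # 0.38: phone victims who do not report

Under these assumptions, a 1.45 mode ratio is consistent with roughly a third of telephone respondents not reporting a victimization, the same order of magnitude as the 37 percent model-based estimate quoted above.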

Privacy

The research findings on survey mode and asking sensitive questions raise a major concern with the current methods of data collection on the NCVS for measuring rape and sexual assault—a lack of privacy. As noted above, the NCVS is interviewer administered, with 43 percent of all interviews (including wave 1) conducted in person. The protocol involves a personal visit by the field representative to the selected address and an interview with each household member who is 12 years of age and older. The interviewing manual for field representatives on administering the NCVS states (U.S. Census Bureau and Bureau of Justice Statistics, 2008, p. A207):

If nonhousehold [emphasis added] members are present, either in a sample housing unit or a group quarters, ask the respondent if he/she wishes to be interviewed in private. If so, make the necessary arrangements to either interview the person elsewhere or at a different time. Some respondents may prefer not to be interviewed while other household members are present. Always respect the respondent’s need for a private interview.

Thus, the interviewer manual indicates that some respondents may prefer a private interview but does not direct the field representative to ask unless nonhousehold members are present. The training material used in the refresher training in 2011 did not cover the need for privacy during individual interviews (U.S. Census Bureau and Bureau of Justice Statistics, 2011a, 2011b).

The panel believes that privacy in interviewing about sexual violence is critical because most rapes and sexual assaults are committed by individuals whom the victim knows. The offender may, in fact, be a member of the household. Another possibility is that a teenager has been a victim of date rape but has not told his or her parents. A respondent who has been sexually victimized may not report the victimization if that reporting may be overheard or otherwise inferred by another household member. This concern goes beyond whether there is another household member in the same room during the interview, to the situation in which the interview can be overheard from another room in the home, to the situation in which another household member may notice that the victim’s interview lasted longer than the one in which he or she participated. As Tourangeau and Yan (2007, p. 862) conclude, “respondents may be reluctant to report sensitive information in surveys partly because they are worried that the information may be accessible to third parties,” as outlined above. Other researchers have concluded that the effect of the presence of others when responding to sensitive questions depends on whether the bystander already knows the information that is being requested (Aquilino, Wright, and Supple, 2000).

Tourangeau and Yan (2007) reviewed research on the effect of the presence of others in reporting on sensitive questions. The results were mixed and very situational. They found that a spouse’s presence did not appear to have a significant overall effect on survey responses, but they found a highly significant effect of parental presence, which reduced socially undesirable responses.

Yu, Stasny, and Li (2008) found that the presence of a spouse during an NCVS interview likely led to the underreporting of incidents of rape and sexual assault. The authors used data from the 1998 to 2004 NCVS for women respondents 16 years of age and older. (They also used 1993 to 1997 data as prior information in their modeling.) They categorized personal interviews by “who was present” during the interview, coded by the field representative: (i) spouse and no one else, (ii) spouse and at least one other person, (iii) at least one person but not the spouse, and (iv) no one else present. Telephone interviews were categorized as “unknown” because the field representative did not know who might be present on the other end of the phone line. In an analysis of unweighted data, Yu, Stasny, and Li (2008, p. 671) found that “compared with a woman who was interviewed alone, rape (including rape, attempted rape, and sexual assault) was reported about one-fifth as frequently when a spouse was present.” As discussed in an earlier section of this chapter, they also reported a mode effect, with rape reported at a rate 1.45 times higher in personal interviews than in telephone interviews. They referred to a telephone interview or the presence of the spouse in a personal interview as a “gag factor” (Yu, Stasny, and Li, 2008, p. 666). Using Bayesian methods, the authors estimated the probabilities that a crime was not reported in the interview: “Thus for interviews with women who are victims of rape and whose spouse was present during the interview, we estimate that 86% of the women did not report the victimization” (Yu, Stasny, and Li, 2008, p. 681).

Several factors make privacy an elusive goal in the NCVS data collection. First, a dwelling may not have a private location where other household members neither see nor hear what is going on. Second, rape and sexual assault are two relatively low-incidence criminal victimizations among the many victimizations that the NCVS measures. Most of the other victimizations involve less sensitive questions, and the field representative’s main goal is to get a completed questionnaire from each household member. The training for interviewers does not stress the need for privacy, and the field representative is likely to view the need to have a completely private conversation as secondary to getting the completed interviews. Third, each household member (12 years of age and older) is interviewed and therefore knows what the others are being asked. This fact, in itself, may cause a victim to feel intimidated when asked to disclose experiences of sexual violence.

Telephone interviewing of household members may offer some privacy. Because field representatives usually make these telephone calls from their own homes, they are directed to make sure that their own family members or neighbors cannot listen to the telephone call. Again, the interviewer manual does not direct the field representative to ask the respondent to try to find a private location in the home (U.S. Census Bureau and Bureau of Justice Statistics, 2008). Moreover, there may not be such an area where other household members cannot hear the respondent’s side of the conversation. The NCVS requires the respondent to describe events in the incident report, which might be overheard by other household members. A respondent may also worry that another household member will try to listen on an extension. The research on this issue is well summarized by Tourangeau and Yan (2007, p. 867):

[The] findings on this issue [telephone interviewing of sensitive questions] are not completely clear, but taken together, they indicate that the interviewer’s physical presence is not the important factor…. On the whole, the weight of the evidence suggests that the telephone interviews yield less candid reporting of sensitive information.

CONCLUSION 8-8 The current data collection mode and methods of the National Crime Victimization Survey do not provide adequate privacy for collecting information on rape and sexual assault. This lack of privacy may be a major reason for underreporting of such incidents.

Interviewer-Respondent Interactions

The NCVS is an interviewer-administered survey. As such, the interaction between the interviewer and the respondent during the interview heavily influences the quality of survey responses. In this section, the report looks at issues associated with interviewers, including gender, training and preparation, and monitoring.

Gender

As discussed above, the presence of an interviewer may lead to misreporting on certain sensitive questions if the respondent is reluctant to talk about socially undesirable opinions or incidents. If an interviewer is administering the survey, then the gender of the interviewer may also influence (either positively or negatively) a respondent who is asked a sensitive question. Catania (1997) found that respondents gave higher item-level response rates to questions regarding same-sex sexual experiences when interviewed by someone of their own gender. Another study examined whether the gender of the voice in ACASI might affect responses to a set of sensitive questions asked of young adults. The findings suggest that female interviewers may get more accurate reports (Dykema et al., 2012, p. 312):

[There were] higher levels of engagement in the behaviors and more consistent reporting among males when responding to a female voice, indicating that males were potentially more accurate when reporting to the female voice. Reports by females were not influenced by the voice’s gender.

A standard operational practice on surveys of sexual conduct or violence has been to use female interviewers. Female interviewers were used exclusively in the National Women’s Study, the National College Women Sexual Victimization Study, and the National Intimate Partner and Sexual Violence Survey (discussed in Chapter 5). The National Violence Against Women Study (discussed in Chapter 5) incorporated a test of interviewer gender, using female interviewers for female respondents and both male and female interviewers for male respondents (Tjaden and Thoennes, 2000). The NCVS uses mostly, but not exclusively, female interviewers.8 The panel agrees with this standard practice but believes that additional research is needed for definitive answers regarding the effect of an interviewer’s gender separate from other factors. Survey organizations are increasingly coding the demographic characteristics of interviewers (such as gender and age) that might affect recruiting and response quality so that possible effects can be more thoroughly studied. The results from these efforts will be important for the design of all surveys on sensitive topics.

Training and Preparation

Interviewers need high-quality training to reduce interviewer effects and to deliver survey responses of high quality. The Census Bureau understands this important aspect of the survey process and strives to train its field representatives appropriately for these complex surveys. However, there are two issues with the training provided to NCVS interviewers: the adequacy of the overall training effort and the rarity of the incidents of interest.

The first issue is that the overall training effort on the NCVS has been inadequate. Refresher training of interviewers on the NCVS was eliminated during a 10-year period due to budget restrictions. The agencies acknowledged the problem (U.S. Census Bureau and Bureau of Justice Statistics, 2011a, pp. 1-5):

[G]eneral performance reviews and refresher training were eliminated. So while the survey remained in the field and we were still able to generate annual crime estimates, we (Census) and the BJS had limited ability to monitor the quality of the data collected and to ensure that our field staff fully understood what was expected of them.

____________

8The Census Bureau faces issues related to equal employment opportunities when considering hiring based on gender.

Fortunately, some training is being restored (Bureau of Justice Statistics, 2012a, p. 11):

Beginning in August 2011, refresher training of all field representatives (FR) was conducted using an experimental split sample cluster design. This was the first comprehensive refresher training that had been conducted since the 1990s. To maintain consistent year-to-year comparisons, Census and BJS implemented the experiment in a manner that isolated the effects of training without contaminating the annual 2011 estimates.

The second issue is that rape and sexual assault is only one type of victimization among many on the NCVS questionnaire, and it is rarely reported. However, questions that ask about this topic require special sensitivity from interviewers. The NCVS refresher training for field representatives was 1.5 days in length, during which the NCVS screener was discussed for only 2 hours. Moreover, those 2 hours covered not only methods of asking sensitive screener questions but also many other issues. The trainers provided a number of useful suggestions to follow when interviewing victims of sexual or other sensitive crimes (see Box 8-2). The panel applauds this refresher training, which covered many facets of the NCVS. However, training on how to ask sensitive questions, the need for privacy when asking those questions, and a fuller understanding of sexual victimization did not get the emphasis needed to ensure complete reporting. And even if adequate training could be provided, it would not be reinforced through the day-to-day survey process: because the NCVS is a general-purpose criminal victimization survey, an interviewer very infrequently gets a positive response to questions about rape and sexual assault.

CONCLUSION 8-9 The current training for National Crime Victimization Survey interviewers with regard to the subject of rape and sexual assault is insufficient to ensure complete and accurate responses. Moreover, because interviewers only infrequently encounter reports of these crimes, they do not get the opportunity to practice and reinforce the training that they do receive.

BOX 8-2
Suggestions When Interviewing Victims of Sexual or Other Sensitive Crimes Provided to National Crime Victimization Survey (NCVS) Interviewers During Refresher Training in 2011

•   Be sensitive to what the respondent is telling you; however, keep the respondent on track because some respondents may have the tendency to tell you more than what is being asked.

•   Be respectful and polite to victims, even to those who do not want to talk.

•   Avoid unnecessary pressure. Be patient.

•   Be supportive and let victims express their emotions, which may include crying or angry outbursts.

•   Be careful not to appear overprotective or patronizing.

•   Avoid judging victims or personally commenting on the situation.

•   Remind the respondents periodically, if necessary, about the importance of their responses.

•   Reassure the respondent that knowing the prevalence of the form of violence they are experiencing will be useful to expand efforts to identify ways to help victims of that type of crime and to hold perpetrators accountable.

•   Supply the respondent with a copy of the NCVS-110 Fact Sheet brochure (show example), which contains several hotline numbers that they may find helpful to call if the person asks for assistance. Make sure you have an ample supply of the Fact Sheet to provide to respondents when needed.

SOURCE: U.S. Census Bureau and Bureau of Justice Statistics (2011a, pp. 2-33-2-34).

Monitoring

Monitoring of interviews is a method of ensuring quality control over the interview process, improving interviewer performance, and improving data quality. It is standard practice for centralized telephone interviewing but is more limited in field interviewing and decentralized telephone interviewing. Thissen et al. (2009, p. 2) provide an overview of the classic techniques of monitoring in field data collection and their advantages and disadvantages. These techniques include in-person observation; post-interview discussions with interviewers; verification contact by telephone or in-person reinterview; review of response data and timers; and tape recording during the interview.

The NCVS is collected with a combination of field and telephone interviews conducted by the field interviewers. Thus, there is currently no centralized telephoning that is monitored on a continuing basis. Previously, a periodic NCVS quality control recontact had been conducted as part of interviewer evaluations, but this process was suspended over several years (along with refresher training) because of budget constraints. In this process, the reinterviewer verified several things (U.S. Census Bureau and Bureau of Justice Statistics, 2010), listed below; a minimal sketch of how such checks might be recorded follows the list:

•   the correct sample units were interviewed,

•   the listing sheets were completed or updated properly,

•   the household screens were completed or updated properly,

•   all screen questions were asked and all answers recorded, and

•   any noninterviews were classified accurately.
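
To illustrate how such recontact results might be tracked for interviewer evaluation, the following is a minimal sketch in Python; the field names and the pass/fail summary are hypothetical illustrations, not the Census Bureau's actual reinterview instrument.

    from dataclasses import dataclass, fields

    @dataclass
    class ReinterviewCheck:
        """Outcome of one quality control recontact (hypothetical fields)."""
        correct_unit_interviewed: bool
        listing_sheet_updated: bool
        household_screen_updated: bool
        all_screen_questions_asked: bool
        noninterviews_classified_correctly: bool

        def passed(self) -> bool:
            # A case passes only if every verification item checks out.
            return all(getattr(self, f.name) for f in fields(self))

For example, a recontact confirming everything except that one screen question was skipped would yield passed() == False, flagging the case for interviewer follow-up.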

The research on interviewer monitoring came mostly from centralized telephone interviewing because field interviewing relied almost exclusively on a "verification contact" until the introduction of computer audio-recorded interviewing (CARI) in 1999.9 CARI is a laptop computer software application that unobtrusively and digitally records the audio exchange between an interviewer and a respondent during the interview. The software is programmed so that individual questions or sections will automatically be recorded for quality review. After the interview is completed, the audio files are downloaded and transmitted to the central program staff for coding and review. (A minimal sketch of such a workflow appears after the numbered list below.)

The CARI technology not only records interviewer-respondent verbal interactions but also ensures that the record of the interview is unbiased:

1.   It records unobtrusively because the microphone is built into the computer;

2.   The microphone is activated at the appropriate points by the computer program, not the interviewer, which not only reduces intrusiveness but also makes the recording independent of the interviewer; and

3.   The digital recordings are exported as audio files of individual questions that can be sorted by question, respondent, or interviewer, which permits rapid and efficient purposive review.
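
To make this workflow concrete, here is a minimal sketch, in Python, of how a CARI-style system might flag questions for automatic recording and organize the exported clips for purposive review. All names here (QUESTIONS_TO_RECORD, AudioClip, clips_for_review) are hypothetical illustrations, not the Census Bureau's or RTI's actual software.

    from dataclasses import dataclass
    from typing import List, Optional

    # Hypothetical IDs of sensitive screener items pre-flagged for recording.
    QUESTIONS_TO_RECORD = {"SCR_41", "SCR_42", "SCR_43"}

    @dataclass
    class AudioClip:
        """One exported audio file per recorded question."""
        question_id: str
        respondent_id: str
        interviewer_id: str
        file_path: str

    def should_record(question_id: str) -> bool:
        # The instrument, not the interviewer, decides when the microphone
        # is live, keeping the recording independent of the interviewer.
        return question_id in QUESTIONS_TO_RECORD

    def clips_for_review(clips: List[AudioClip],
                         question_id: Optional[str] = None,
                         interviewer_id: Optional[str] = None) -> List[AudioClip]:
        # Filter and sort exported clips by question or interviewer so that
        # reviewers can conduct rapid, purposive review rather than listening
        # to entire interviews.
        selected = [c for c in clips
                    if (question_id is None or c.question_id == question_id)
                    and (interviewer_id is None or c.interviewer_id == interviewer_id)]
        return sorted(selected, key=lambda c: (c.question_id, c.interviewer_id))

For example, clips_for_review(clips, interviewer_id="FR-072") would pull every recorded clip for a single field representative, supporting the kind of interviewer-level quality review described above.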

In a feasibility report, Biemer et al. (2000, p. 1) identified the range of applications, including

____________

9The method was developed and pioneered by RTI. It was first deployed in a national field study in the 1999 National Survey of Child and Adolescent Well-Being.

•   detecting gross departures from appropriate procedures, including interview fabrication;

•   evaluating interviewer execution of interviewing guidelines, which permits corrective feedback for future interviews as well as data quality control for existing interviews;

•   identifying questionnaire problems and data collection difficulties using interviewer-respondent interaction coding; and

•   collecting verbatim responses to open-ended questions in an interview.

These applications include all of the goals of monitoring within a centralized telephone-interviewing facility with the exception of immediate feedback.

The researchers (Biemer et al., 2000) found that the CARI-based approach was less expensive than traditional approaches for interview verification of field interviews: 23 percent less expensive than face-to-face follow-up and 32 percent less expensive than a telephone- and postcard-based approach. However, these analyses did not include the system development costs of a functional CARI system.
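
To make the comparison concrete with hypothetical figures: if face-to-face verification follow-up cost $100 per completed case, the CARI-based approach would cost about $77 per case ($100 × (1 − 0.23)); if a telephone- and postcard-based approach cost $100 per case, CARI would cost about $68 ($100 × (1 − 0.32)). These dollar amounts are illustrative only; Biemer et al. (2000) report percentage differences, not per-case costs.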

It is particularly noteworthy that the CARI system was piloted on an extremely sensitive survey, the National Survey of Child and Adolescent Well-Being, which is a panel survey of 6,700 children who are the subjects of reports of abuse and neglect. The study required both signed and audio-recorded consent to use CARI. Consent to use CARI was obtained in 85 percent of the caseworker interviews, 83 percent of the caregiver interviews, and 82 percent of the child interviews.

CONCLUSION 8-10 Monitoring of interviewers is important to ensure quality and to identify areas in which an individual interviewer needs reinforcement and areas in which improved training is needed. The monitoring method used in the National Crime Victimization Survey, periodic reinterviews of selected respondents, is not adequate to ensure interviewing quality.


