8


Potential Sources of Error:
Nonresponse, Specification,
and Measurement

This chapter continues the analysis in Chapter 7 of potential sources of error in the National Crime Victimization Survey (NCVS), covering nonresponse error, specification error, and measurement error.

NONRESPONSE ERROR

Nonresponse error in surveys arises from the inability to obtain a useful response to all survey items from the entire sample. A critical concern is when that nonresponse leads to biased estimates. Nonresponse bias is a product of the difference between respondents and nonrespondents on a particular measure and the size of the nonresponse population. A lower response rate increases the potential for greater nonresponse bias, but when the data are missing at random, a lower response rate will neither create nor increase nonresponse error.
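The relationship described above can be made concrete with a common approximation: the bias in a respondent-only mean is roughly the nonresponse rate times the difference between respondent and nonrespondent means. The sketch below uses invented numbers, not NCVS estimates.

```python
# Illustrative only: invented numbers, not NCVS estimates.
# Common approximation for nonresponse bias in a sample mean:
#   bias ~ (nonresponse rate) x (respondent mean - nonrespondent mean)

def nonresponse_bias(nonresponse_rate, mean_respondents, mean_nonrespondents):
    """Approximate bias of the respondent mean relative to the full-sample mean."""
    return nonresponse_rate * (mean_respondents - mean_nonrespondents)

# Hypothetical victimization rates (per 1,000 persons): respondents report 1.1,
# nonrespondents would have reported 2.0.
bias_10 = nonresponse_bias(0.10, 1.1, 2.0)  # 10 percent nonresponse
bias_30 = nonresponse_bias(0.30, 1.1, 2.0)  # 30 percent nonresponse

print(round(bias_10, 3))  # -0.09: the respondent mean understates the true rate
print(round(bias_30, 3))  # -0.27: same group difference, more nonresponse, more bias

# If data are missing at random, the two group means coincide and the bias vanishes,
# no matter how low the response rate is.
print(nonresponse_bias(0.30, 1.1, 1.1))  # 0.0
```

This is why a low response rate by itself signals risk of bias rather than bias: the damage depends on how different the nonrespondents are.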

The NCVS, like most federal household surveys, is voluntary and not required by law. The challenges facing today’s federal household surveys were recently summarized by the National Research Council (2013a, p. 68):

[They] include maintaining adequate response from increasingly busy and reluctant respondents. More and more households are non-English speaking, and a growing number of higher income households have controlled-access residences…. Today’s household surveys face confidentiality and privacy concerns, a public growing more suspicious of its government, and competition from an increasing number of private as well as government surveys vying for the public’s attention.



Copyright © National Academy of Sciences. All rights reserved.





These challenges mean that maintaining a high level of response on a large voluntary national survey is difficult. This section examines the nonresponse profile of the NCVS, looking at both the level of nonresponse and its potential effect on the measured rate of sexual victimization.

Nonresponse can arise at several points in the process of sample recruitment. In the NCVS, the household address is selected, and then each household member (12 years of age and older) is asked to complete the survey. Nonresponse on a questionnaire (unit nonresponse) can occur at two stages. Household nonresponse occurs when no one living at the selected housing unit responds in the data collection wave. Person-level nonresponse occurs when some eligible persons in the household respond and some do not. In addition, a household or person may respond on some waves but not on all waves. In the NCVS, a household responding to at least one wave of the NCVS is counted as a household respondent for the survey. Likewise, a person who is interviewed in one or more waves is called a person respondent. Finally, item nonresponse (as opposed to unit nonresponse) can also occur when person respondents do not complete some questions that should have been completed. This section looks in more depth at person-level nonresponse at both the unit and item levels.

Unit-Level Nonresponse

The NCVS has maintained a moderately high level of survey (unit) response at both the household level and the person level (see Table 8-1). In 2011, 79,800 households participated in the NCVS, representing a 90 percent household response rate for the year.1 The person-level response rate (most important for victimization rates) was 88 percent in 2011. Response rates have decreased several percentage points over the decade, but not substantially (see Table 4-2 in Chapter 4). These response rates are consistent with those of several other important federal household surveys in 2011.2

1 It appears that the Census Bureau is defining the housing unit response rate as the number of housing units that participated at one or more waves during the year divided by the number that should have participated during the year. This is an inflated number because if a housing unit participated in January but not in July (or vice versa), then it is still counted as a respondent for the year. A better measure of the response rate is the number of times housing units participated divided by the number of times housing units were eligible to participate. We believe this response rate calculation is a better indicator of the potential for (or risk of) nonresponse bias than the current way the response rate is calculated.

2 For comparison, in 2010 the Current Population Survey had monthly household response rates of 91-93 percent; the American Community Survey had a household response rate of 98 percent; the National Health Interview Survey had a household response rate of 82 percent; and the Consumer Expenditure Survey had a household response rate of 73 percent.

TABLE 8-1  National Crime Victimization Survey Response Rates for Households and Individuals

        Household Level               Person Level
Year    Responding   Response Rate    Total Persons   Responding Persons   Response Rate
1996    45,000       93               NA              85,330               91
1997    42,910       95               NA              79,470               90
1998    43,000       94               NA              78,900               89
1999    43,000       93               204,915         77,750               89
2000    43,000       93               207,800         79,710               90
2001    44,000       93               208,598         79,950               89
2002    42,000       92               203,061         76,050               87
2003    42,000       92               201,388         74,520               86
2004    42,000       91               202,771         74,500               86
2005    38,600       91               181,009         67,000               84
2006    38,000       91               179,717         67,650               86
2007    41,000       90               170,869         73,650               86
2008    42,093       90               155,704         77,852               86
2009    38,728       92               157,796         68,665               87
2010    40,974       92               NA              73,283               88
2011    79,800       90               NA              143,120              88

SOURCES: Data from Bureau of Justice Statistics (1997, 1998, 1999, 2000, 2001, 2002a, 2003, 2004, 2005, 2006, 2007, 2008a, 2009, 2010, 2011, 2012).

Nonresponse in a survey may be “missing at random” (MAR), meaning that the decision not to respond on the survey is unrelated to key study outcome measures, such as crime victimization, and that reweighting of the responding units may suffice to adjust for the missing data. The presence of this type of nonresponse, when appropriately reweighted, does not cause a bias, but it does reduce sample size and increase sampling error. Other unit nonresponse is judged to be “not missing at random” (NMAR) and thus is more of a problem because it can produce bias in the estimates as well as increase the sampling error. If the nonresponse varies with key outcome measures and their covariates (such as race, income, or geographic area), then the nonresponse may be MAR within groups formed based on these covariates. In this case, reweighting might be done within the groups, thus reducing potential nonresponse bias. Because of the panel nature of the NCVS, considerable information is known about the demographics of selected households and individual household members if they respond at least once over the 3-year life of the panel.
The Bureau of Justice Statistics (BJS) uses this information for much more than just adjusting for nonresponse. For example, one adjustment is to “inflate sample point estimates to known population totals to compensate for survey nonresponse and other aspects of the sample design” (Bureau of Justice Statistics, 2008b, p. 12). (See Chapter 4 for more detail on this and other adjustments.)

The success of the BJS adjustment processes in addressing potential unit-level nonresponse bias in the NCVS was examined by NORC at the University of Chicago (2009) in an extensive study with several parts. In one part, NORC conducted a capture-recapture analysis across panel waves to obtain relative counts of different categories of nonrespondents. This technique separates the chronic nonresponders across the 3 years from the occasional and frequent responders, hypothesizing that the chronic nonresponders were potentially NMAR. Based on this assumption as to which respondents were NMAR, the NORC report estimates that 81 percent of the nonrespondents are not chronic nonresponders and may be assumed to be MAR (see Table 8-2).

TABLE 8-2  Survey-Level Nonresponse on the National Crime Victimization Survey Judged to Be Missing at Random (MAR), by Subgroups

                               Percentage of Nonresponses   Total Counts of Survey-Level
Subgroup                       Judged MAR                   Nonrespondents Judged MAR
All                            81.10                        2,762
Male                           84.04                        1,327
Female                         83.43                        1,435
Black                          84.81                        469
Other                          80.43                        2,294
25 years of age and younger    84.11                        323
25 years of age and older      83.74                        2,441

SOURCE: NORC at the University of Chicago (2009, p. 19, Table 2.5).

Using the term “ignorable” for MAR and “nonignorable” for NMAR nonresponse, the report (NORC at the University of Chicago, 2009, p. 16) concludes:

Overall, more than 80 percent of the nonresponses in NCVS can be regarded as “ignorable.” Proportionately, more nonresponses by male, black, and young (age 25 or less) eligible interviewees are ignorable. The largest variation occurs for race/ethnicity, with eligible black interviewees having proportionately more ignorable nonresponses (84.81% vs. 80.43%).

NORC points out that its techniques did not allow analysis of nonresponse in the first round of the panel. In a subsequent part of the report, NORC developed log linear models to predict response disposition for key subgroups. The models examined “easy versus hard” responder characteristics. Finally, NORC made county-level comparisons between the statistics from the Uniform Crime Reports and the NCVS pooled across years. The report’s conclusion (NORC at the University of Chicago, 2009, p. 47) is “little evidence for nonresponse bias after the first round of the survey. . . . The within unit nonresponse is weight adjusted to age and race controls in the NCVS and these seem to be the categories that are the main drivers in any potential nonresponse bias.”

The panel has important reservations about some of the NORC analysis and conclusions. The capture-recapture analysis is based on the assumption that individuals who respond at least once but not routinely on the NCVS are MAR. This assumption appears to go untested and yet underpins NORC’s overall analysis. Another limitation is that the logistic modeling techniques used in the study looked at only a few standard demographic characteristics. Finally, it is unclear whether this broad look at nonresponse on the NCVS paints the same picture as would an analysis of the subpopulations that are at greater risk for sexual violence.

CONCLUSION 8-1 The overall unit response rates, as calculated, on the National Crime Victimization Survey are moderately high and have been reasonably stable over the past 10 years. Although an independent analysis concluded that the methods that the Bureau of Justice Statistics uses to adjust for nonresponse appear to provide a satisfactory correction for nonresponse bias at the unit level, our panel has reservations about that analysis and remains concerned that there may be a nonresponse bias related to sexual victimization.

Panel Attrition

Panel attrition is a response pattern in surveys with multiple waves of data collection in which a respondent’s propensity to respond decreases over these waves. Because the NCVS is a panel survey with seven waves of data collection over 3 years, it is important to examine the nonresponse pattern across waves. There are many reasons that an individual may attrite, including deciding to quit reporting, not being available during the data collection period, or moving to a different address.

BJS does not provide NCVS response rates by wave. To get some sense of attrition rates, the panel calculated unweighted response rates (at the person level)3 using data for 2007-2008 by time in sample (see Figure 8-1).

3 The first wave person-level response rate is the proportion of persons participating at first wave among sampled, eligible persons at first wave. The person-level attrition rate at wave t > 1 is the proportion of persons who participated at first wave who also participated at wave t.
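The rates defined in footnote 3 can be computed from a simple person-by-wave participation matrix. The records below are invented for illustration; they are not NCVS microdata.

```python
# Illustrative only: a small invented panel, not NCVS microdata.
# Rows are sampled persons; columns record participation (1) or not (0) in waves 1..7.
panel = [
    [1, 1, 1, 1, 1, 1, 1],  # responds in every wave
    [1, 1, 1, 0, 1, 0, 0],  # occasional nonresponder
    [1, 0, 0, 0, 0, 0, 0],  # drops out after wave 1
    [0, 0, 0, 0, 0, 0, 0],  # never responds
]

n_sampled = len(panel)
wave1 = [p for p in panel if p[0] == 1]  # wave-1 participants

# First-wave person-level response rate: wave-1 participants among all
# sampled, eligible persons (footnote 3's first definition).
rate_wave1 = len(wave1) / n_sampled

# Retention at wave t > 1: share of wave-1 participants who also respond
# at wave t (footnote 3's attrition series, one value per later wave).
retention = [sum(p[t] for p in wave1) / len(wave1) for t in range(1, 7)]

# Share of wave-1 participants who respond in all seven waves
# (the "ALL" bar in a chart like Figure 8-1).
all_waves = sum(all(p) for p in wave1) / len(wave1)

print(rate_wave1)   # 0.75
print(retention)    # declines across waves for this toy panel
print(all_waves)    # only one of three wave-1 participants completes all waves
```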

FIGURE 8-1  National Crime Victimization Survey person-level attrition rates (unweighted) for the period 2007-2008.
NOTES: Time in sample (TIS) 1 is the response rate at the initial wave, TIS 2-7 is the response rate given response at TIS 1, and ALL is the proportion of eligible persons responding to all seven waves.
SOURCE: Data from National Crime Victimization Survey, 2007-2008.

These attrition rates were calculated at the person level using linked longitudinal files. One can see substantial attrition in response rates over time, with less than half the sample responding in all waves.

The NORC at the University of Chicago (2009) report provides insight into this panel attrition by subgroups. The report’s analysis is based on the total number of waves in which a respondent participated, without an ordering of those waves over time. Looking at the age of the respondents, the analysis found that younger respondents participated in fewer waves than did older respondents (see Figure 8-2).4 Approximately 15 percent of respondents 25 years of age and younger participated in all seven waves; in contrast, approximately 45 percent of respondents 55 years of age and older did so. And as can be seen in the figure, nearly 30 percent of respondents 25 years of age and younger did not participate after the first wave.

4 Data included only individuals who had participated in the first wave.

FIGURE 8-2  Participation in National Crime Victimization Survey waves by respondents’ ages, 2005-2006.
SOURCE: NORC at the University of Chicago (2009, p. 29, Chart 3.3).

The NORC at the University of Chicago (2009) report also looks at response by household structure (see Figure 8-3). Individuals living as couples (couple only, couple with kids and family, couple with others) responded in more waves than did individuals who were not identified as being part of a couple (male with relatives, male with others, female with relatives, female with others).

FIGURE 8-3  Participation in National Crime Victimization Survey waves by family structure, 2002-2006.
SOURCE: NORC at the University of Chicago (2009, p. 28, Chart 3.2).

The results shown in both of these figures provide particular concern for the estimation of rape and sexual assault because the low responders—particularly young people and females who are not part of a couple—appear to be more at risk for being victims of those crimes. In a multivariate analysis of subgroup risk among females for rape and sexual assault (Lauritsen, 2012), younger people (in the age groups 12 to 17, 18 to 34, and 35 to 49) have a higher odds ratio than do older (50+) individuals (see Table 8-3). Females who are not part of a couple (widowed, divorced, separated, and never married) have a higher odds ratio than do married women. Planty et al. (2013) provide similar results.

TABLE 8-3  Risk for Rape and Sexual Assault for Females, by Age and Marital Status, National Crime Victimization Survey, 1994-2009

                                              Odds Ratio   95% Confidence Interval   Significance
Age (in comparison with 50+)
  35 to 49                                    4.6          [3.31, 6.38]              *
  18 to 34                                    8.7          [6.21, 12.22]             *
  12 to 17                                    9.23         [6.31, 13.40]             *
Marital status (in comparison with married)
  Widowed                                     2.48         [1.43, 4.29]
  Divorced                                    5.56         [4.44, 6.96]              *
  Separated                                   10.51        [7.89, 14.00]             *
  Never married                               3.90         [3.12, 4.87]              *

*The odds ratios for rape and sexual assault are significantly greater than the odds ratios for other forms of serious violence.
SOURCE: Lauritsen (2012, Table 6).

Thus, attrition rates are higher in several subgroups that appear to be at higher risk for sexual violence. It is unclear whether this is a related effect. One could argue that someone who has been sexually victimized may be less willing to respond on the next NCVS, knowing that questions regarding victimization will be asked. Similarly, one could argue that someone who has been sexually victimized may be more likely to move to a safer neighborhood, and thus no longer be an eligible respondent. The panel did not find data that could answer this question definitively, but there appears to be potential for a nonresponse bias that could contribute to underreporting of these victimizations.

CONCLUSION 8-2 There appears to be notable panel attrition over the 3 years in the National Crime Victimization Survey (NCVS). This attrition is particularly problematic for estimating rape and sexual assault because some people at greater risk for being victimized by these crimes—young people and females not living as part of a couple—are also some of those most likely to drop out before the seven waves of the NCVS have been completed.

CONCLUSION 8-3 Although the Bureau of Justice Statistics publishes annual response rates for the National Crime Victimization Survey (NCVS), the published data do not include important details of response, such as mode of data collection and attrition rate. Such details are needed by data users for a thorough assessment of the quality of NCVS estimates.

Item Nonresponse

Item nonresponse occurs when a respondent completes a substantial portion of a questionnaire (enough to count the interview as “complete”) but does not provide answers to certain key items. The panel could not find an analysis of item nonresponse on the NCVS in general, nor one specifically for the questionnaire items regarding rape and sexual assault. Without such analysis, the panel relied on its collective experience and judgment about item response for key questions regarding sexual victimization.
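An item nonresponse rate is conventionally computed from explicit "refused" or "don't know" codes among unit respondents. The toy tabulation below, with invented response codes, shows that calculation, and also why it can miss the disguised refusals discussed next.

```python
# Illustrative only: invented response codes, not NCVS data.
# Item nonresponse rate for a single question among unit respondents.
answers = ["yes", "no", "no", "refused", "dont_know", "no", "no", "no"]

explicit_missing = sum(a in ("refused", "dont_know") for a in answers)
item_nonresponse_rate = explicit_missing / len(answers)
print(item_nonresponse_rate)  # 0.25: two explicit nonanswers out of eight

# The NCVS screener offers only YES/NO check boxes, so a respondent who
# declines to disclose an incident can simply check NO. That disguised
# refusal is counted as a legitimate zero and never enters this rate.
```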
There is considerable evidence in survey research that respondents are reluctant to answer socially undesirable questions (Bradburn, 1983; Schaeffer, 2000; Tourangeau and Smith, 1996; Tourangeau and Yan, 2007). (See also the section on “Questionnaire” in this chapter.) The panel thinks that item “refusals” on these particular socially undesirable questions would be difficult to identify. If a respondent does not want to report a rape or sexual assault, or to talk about such an assault, then he or she is more likely to answer NO to the appropriate screening questions (he or she was not victimized) rather than more directly refusing to answer the question. In fact, the screening questionnaire (Bureau of Justice Statistics, n.d.-d) has only check boxes for YES or NO for these questions, and no response box for “refused” or “don’t know.” Thus, these item refusals are most likely disguised as legitimate zeros (there was no victimization).

Panel surveys may create an additional nuance regarding item nonresponse. After going through one or more waves of the survey, a respondent learns that answering YES to a screening question will lead to a range of additional questions regarding the specific incident. Surveys with this repeated pattern, and especially those with the pattern repeated across multiple waves, are subject to “satisficing”: a respondent provides an answer (perhaps NO to a screening question) that moves the interviewer on to the next question, without necessarily being an accurate or complete response. This respondent conduct is hard to detect and measure, but the panel thinks it is likely that satisficing is occurring on the NCVS.

CONCLUSION 8-4 The panel believes it is likely that item refusals on questions about sexual victimization on the National Crime Victimization Survey may be recorded as if they were “no” responses rather than item nonresponse when a respondent does not want to report a victimization. Another possibility is for a respondent to sometimes answer “no” on screening questions simply to avoid additional questions in the survey.

SPECIFICATION ERROR

For any survey, its intended purpose and concepts must be clearly defined in order for survey instruments and procedures to accurately translate those concepts into the collection of data. In surveys, specification error may occur when there is a mismatch between what the survey is measuring and what it is intended to measure.5 As defined by Biemer (2010, p. 31): “specification error pertains specifically to the problem of measuring the wrong concept in a survey, rather than measuring the right concept poorly.”

5 This definition is different from that used by economists and other mathematical modelers, for whom “specification error” refers to an incorrect statement of an empirical model. We use the term differently in the report.

BOX 8-1
Definitions of Rape and Sexual Assault Used on the National Crime Victimization Survey

Rape—Forced sexual intercourse including both psychological coercion as well as physical force. Forced sexual intercourse means vaginal, anal, or oral penetration by the offender(s). This category also includes incidents where the penetration is from a foreign object such as a bottle. Includes attempted rapes, male as well as female victims, and both heterosexual and homosexual rape. Attempted rape includes verbal threats of rape.

Sexual Assault—A wide range of victimizations, separate from rape or attempted rape. These crimes include attacks or attempted attacks generally involving unwanted sexual contact between victim and offender. Sexual assaults may or may not involve force and include such things as grabbing or fondling. Sexual assault also includes verbal threats.

SOURCE: Bureau of Justice Statistics (n.d.-b).

This section examines a key concept associated with the NCVS to see if it is clearly defined and consistent between the survey’s purposes and processes. This key concept is to identify if and when a respondent has been the victim of a rape or sexual assault. BJS has developed a clear definition of what the survey is intended to measure (see Box 8-1). In the omnibus screener that is currently used in the NCVS, the deliberate approach is to soften the link between the screening cues and any particular type of criminal victimization. In particular, for rape and sexual assault, as BJS translates these specific concepts into data collection, the respondent is asked the following question (Bureau of Justice Statistics, n.d.-d):

Has anyone attacked or threatened you in any of these ways:
• With any weapon, for instance, a gun or knife,
• With anything like a baseball bat, frying pan, scissors, or stick,
• By something thrown, such as a rock or bottle,
• Include any grabbing, punching or choking,
• Any rape, attempted rape or other type of sexual attack, [emphasis added]
• Any face to face threats, OR
• Any attack or threat or use of force by anyone at all. Please mention it even if you are not certain it was a crime?

The respondent also is asked a special follow-up question that focuses on how well the respondent knew the offender:

Incidents involving forced or unwanted sexual acts are often difficult to talk about. (Other than any incidents already mentioned…

This question describes a specific action (“putting a penis in your vagina”), which is more likely to be clearly understood than asking a respondent if he or she has been raped. This approach was reinforced in a recent discussion of research methods for measuring rape and sexual assault (Jaquier, Johnson, and Fisher, 2011, p. 27):

The usefulness of behaviorally specific questions cannot be overemphasized, not necessarily because they produce larger estimates of rape, but because they use words and phrases that describe to the respondent exactly what behavior is being measured. Using behaviorally specific screen questions appears to cue more women to recall their experiences.

Most of the studies that use behaviorally specific questions have measured a higher rate of incidence of sexual violence (Fisher, 2009), and it is the panel’s judgment that the use of behaviorally specific questions improves communication with the respondent and facilitates more consistent responses.

CONCLUSION 8-6 Words, such as “rape” and “sexual assault,” on the National Crime Victimization Survey may not be consistently understood by survey respondents. Other surveys have used more behaviorally specific words to describe a specific set of actions. More specific wording of questions would be understood more consistently by all respondents and thus lead to more complete and accurate answers.

The NCVS is a criminal victimization survey. It is introduced that way to household members. Once an interview begins, the questionnaire goes through a listing of crimes, asking each respondent if he or she has been the victim of any of them. When asked questions about rape and sexual assault, it is clear that the interviewer is asking about a crime. In fact, the questions about rape and sexual assault are embedded among questions that are dominated by other crimes. For example, as noted above, the following question is dominated by the descriptions of weapons and assaults.7 Rape and sexual assault, particularly when no weapon is involved, may appear to be less central to the line of inquiry than other forms of assault in this list (Bureau of Justice Statistics, n.d.-d).

7 The context and surrounding questions in a questionnaire may greatly affect responses on a survey. This was illustrated by Gibson et al. (1978, p. 251) in an experiment that added a series of attitude questions about crime to the National Crime Survey (NCS). They found that inclusion of the attitude supplement to the NCS had “a statistically significant and substantial impact on the victimization rates obtained.”
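A context effect of the kind footnote 7 describes can be screened for with a standard two-proportion z-test comparing victimization reporting rates under two questionnaire versions. The counts below are invented for illustration; they are not data from Gibson et al. (1978).

```python
# Illustrative only: invented counts, not data from Gibson et al. (1978).
# Two-proportion z-test for a difference in reporting rates between
# questionnaire versions (e.g., with and without an attitude supplement).
from math import sqrt, erf

def two_proportion_z(hits1, n1, hits2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2 (pooled variance)."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal tail.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 320 of 4,000 respondents report victimization under one
# questionnaire context, 250 of 4,000 under another.
z, p = two_proportion_z(320, 4000, 250, 4000)
print(round(z, 2), round(p, 4))  # a |z| near 3 would indicate a substantial context effect
```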

Has anyone attacked or threatened you in any of these ways:
• With any weapon, for instance, a gun or knife,
• With anything like a baseball bat, frying pan, scissors, or stick,
• By something thrown, such as a rock or bottle,
• Include any grabbing, punching or choking,
• Any rape, attempted rape or other type of sexual attack,
• Any face to face threats, OR
• Any attack or threat or use of force by anyone at all. Please mention it even if you are not certain it was a crime?

Most sexual violence is committed by someone known to the victim. The victim may not have contacted the police (it is estimated that between 65 and 80 percent of such violent incidents are not reported to police) and may not think of the incident as a crime. The respondent may also think that because she or he did not contact the police about the incident, it should not be reported on a government crime inquiry. A respondent may fail to respond for these reasons even though the current NCVS screener has a cue reminding that “people often do not think of incidents committed by someone they know.” Alternatively, the respondent may understand that the sexual victimization was criminal but may fear reprisal or may not want to get the other person “in trouble.” Thus, the respondent may have reservations about answering questions about criminal incidents and the risk of disclosure to police.

CONCLUSION 8-7 Questions about incidents of rape and sexual assault in the National Crime Victimization Survey are asked in the context of a criminal victimization survey and embedded within individual questions that describe other types of crimes. This context may inhibit reporting of incidents that the respondent does not think of as criminal, did not report to the police, or does not want to report to police.

Data Collection Modes and Methods

Data collection mode can have important consequences for total survey quality. The mode affects the context of a survey: it affects questionnaire construction, the amount and type of communication with respondents, and the completion rate, among other things. Considerable survey research regarding mode effects in surveys has been conducted. One of the most relevant studies, Tourangeau and Smith (1996), compared three methods (computer-assisted personal interviewing [CAPI], computer-assisted self-administered interviewing [CASI], and audio computer-assisted self-administered interviewing [ACASI]) of collecting survey data about sexual behaviors and other sensitive topics. Tourangeau and Smith (1996, p. 275) conclude:

The three mode groups did not differ in response rates, but the mode of data collection did affect the level of reporting of sensitive behaviors: both forms of self-administration tended to reduce the disparity between men and women in the number of sex partners reported. Self-administration, especially via ACASI, also increased the proportion of respondents admitting that they had used illicit drugs.

Thus a choice of data collection mode is very important when dealing with sensitive questions. A question may involve a potentially “socially undesirable” response. If an interviewer is asking the question, hearing the answer, and perhaps probing for more information, then the respondent may be concerned about the interviewer’s approval or disapproval. Thus, a self-administered mode of collection generally provides respondents with less motivation to misreport on sensitive questions. In a review of reporting errors in surveys, Tourangeau and Yan (2007, p. 867) conclude:

[F]indings on mode difference in reporting of sensitive information clearly point a finger at the interviewer as a contributor to misreporting. It is not that the interviewer does anything wrong. What seems to make a difference is whether the respondent has to report his or her answers to another person.

The NCVS is interviewer administered. When the NCVS began, it relied more on in-person interviews with household members. This is still the method used for the first wave interviews. Beginning in 1980, cost considerations led BJS to use telephone interviewing (by the field representative) in subsequent waves, and telephone interviewing is now encouraged in all but wave 1.
Approximately 57 percent of all within-unit interviews are conducted over the telephone. Because this percentage includes wave 1 interviews (which are primarily conducted in person), the percentage of telephone interviews in all subsequent waves is higher. Yu, Stasny, and Lin (2008) reported a mode effect in the NCVS, with rape reported at a rate 1.45 times higher in personal interviews than in telephone interviews. Using Bayesian methods, the authors estimated the probabilities that a personal crime that had occurred was not reported in the interview: "Thus for interviews conducted over the telephone with women who are victims of any type of personal crime (except for personal larceny), we estimate that approximately 37% of the women did not report the victimization" (Yu, Stasny, and Lin, 2008, p. 681). This analysis used unweighted data from the 1998 to 2004 NCVS for women respondents 16 years of age and older. (They also used 1993 to 1997 data as prior information in their Bayesian models.)

Privacy

The research findings on survey mode and asking sensitive questions raise a major concern with the current methods of data collection on the NCVS for measuring rape and sexual assault: a lack of privacy. As noted above, the NCVS is interviewer administered, with 43 percent of all interviews (including wave 1) conducted in person. The protocol involves a personal visit by the field representative to the selected address and an interview with each household member who is 12 years of age or older. The interviewing manual for field representatives administering the NCVS states (U.S. Census Bureau and Bureau of Justice Statistics, 2008, p. A207):

If nonhousehold [emphasis added] members are present, either in a sample housing unit or a group quarters, ask the respondent if he/she wishes to be interviewed in private. If so, make the necessary arrangements to either interview the person elsewhere or at a different time. Some respondents may prefer not to be interviewed while other household members are present. Always respect the respondent's need for a private interview.

Thus, the interviewer manual acknowledges that some respondents may prefer a private interview but does not direct the field representative to ask about privacy unless nonhousehold members are present. The training material used in the 2011 refresher training did not cover the need for privacy during individual interviews (U.S. Census Bureau and Bureau of Justice Statistics, 2011a, 2011b).

The panel believes that privacy in interviewing about sexual violence is critical because most rapes and sexual assaults are committed by individuals whom the victim knows. The offender may, in fact, be a member of the household. Another possibility is that a teenager has been a victim of date rape but has not told his or her parents.
A respondent who has been sexually victimized may not report the victimization if that report may be overheard or otherwise inferred by another household member. This concern goes beyond whether another household member is in the same room during the interview: the interview may be overheard from another room in the home, or another household member may notice that the victim's interview lasted longer than the one in which he or she participated. As Tourangeau and Yan (2007, p. 862) conclude, "respondents may be reluctant to report sensitive information in surveys partly because they are worried that the information may be accessible to third parties." Other researchers have concluded that the effect of the presence of others when responding to sensitive questions depends on whether the bystander already knows the information that is being requested (Aquilino, Wright, and Supple, 2000).

Tourangeau and Yan (2007) reviewed research on the effect of the presence of others on the reporting of sensitive questions. The results were mixed and highly situational. They found that a spouse's presence did not appear to have a significant overall effect on survey responses, but they found a highly significant effect of parental presence, which reduced reports of socially undesirable behavior.

Yu, Stasny, and Lin (2008) found that the presence of a spouse during an NCVS interview likely led to underreporting of incidents of rape and sexual assault. The authors used data from the 1998 to 2004 NCVS for women respondents 16 years of age and older. (They also used 1993 to 1997 data as prior information in their modeling.) They categorized personal interviews by "who was present" during the interview, as coded by the field representative: (i) spouse and no one else, (ii) spouse and at least one other person, (iii) at least one person but not the spouse, and (iv) no one else present. Telephone interviews were categorized as "unknown" because the field representative did not know who might be present on the other end of the phone line. In an analysis of unweighted data, Yu, Stasny, and Lin (2008, p. 671) found that "compared with a woman who was interviewed alone, rape (including rape, attempted rape, and sexual assault) was reported about one-fifth as frequently when a spouse was present." As discussed in an earlier section of this report, they also reported a mode effect, with rape reported at a rate 1.45 times higher in personal interviews than in telephone interviews. They referred to a telephone interview or the presence of the spouse in a personal interview as a "gag factor" (Yu, Stasny, and Lin, 2008, p. 666).
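Yu, Stasny, and Lin (2008, p. 681) also estimate that 86 percent of rape victimizations went unreported when a spouse was present during the interview. The sketch below is illustrative arithmetic only, not the authors' Bayesian model; it converts each published nonreporting estimate into a reporting rate and applies the published ratios to show how the figures fit together:

```python
# Illustrative arithmetic only: this is NOT the authors' Bayesian model.
# It converts each published nonreporting estimate into a reporting rate
# and applies the published ratios to see how the figures fit together.

def reporting_rate(nonreport_fraction: float) -> float:
    """Reporting rate implied by an estimated nonreporting fraction."""
    return 1.0 - nonreport_fraction

# Telephone interviews: an estimated 37% of victimizations went unreported.
phone_report = reporting_rate(0.37)        # 0.63
# Rape was reported at 1.45 times the telephone rate in personal interviews.
personal_report = phone_report * 1.45      # ~0.91

# Spouse present: an estimated 86% went unreported, and rape was reported
# about one-fifth as often as when the woman was interviewed alone.
spouse_report = reporting_rate(0.86)       # 0.14
alone_report = spouse_report * 5           # ~0.70 (a derivation, not a study figure)

print(f"implied personal-interview reporting rate: {personal_report:.2f}")
print(f"implied reporting rate when interviewed alone: {alone_report:.2f}")
```

Read this way, the one-fifth ratio and the 86 percent estimate are mutually consistent only if roughly 70 percent of rape victimizations are reported when the respondent is interviewed alone; that implied rate is a back-of-the-envelope derivation, not a figure from the study.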
Using Bayesian methods, the authors estimated the probabilities that a crime was not reported in the interview: "Thus for interviews with women who are victims of rape and whose spouse was present during the interview, we estimate that 86% of the women did not report the victimization" (Yu, Stasny, and Lin, 2008, p. 681).

Several factors make privacy an elusive goal in NCVS data collection. First, a dwelling may not have a private location where other household members can neither see nor hear what is going on. Second, rape and sexual assault are two relatively low-incidence victimizations among the many that the NCVS measures. Most of the other victimizations involve less sensitive questions, and the field representative's main goal is to get a completed questionnaire from each household member. The training for interviewers does not stress the need for privacy, and the field representative is likely to view a completely private conversation as secondary to getting the completed interviews. Third, each household member (12 years of age and older) is interviewed and therefore knows what the others are being asked. This fact, in itself, may cause a victim to feel intimidated when asked to disclose experiences of sexual violence.

Telephone interviewing of household members may offer some privacy. Because field representatives usually make these telephone calls from their own homes, they are directed to make sure that their own family members or neighbors cannot listen to the call. Again, however, the interviewer manual does not direct the field representative to ask the respondent to find a private location in the home (U.S. Census Bureau and Bureau of Justice Statistics, 2008), and there may not be an area where other household members cannot hear the respondent's side of the conversation. The NCVS requires the respondent to describe events in the incident report, which might be overheard by other household members. A respondent may also be concerned that another household member may try to listen on an extension. The research on this issue is well summarized by Tourangeau and Yan (2007, p. 867):

[The] findings on this issue [telephone interviewing of sensitive questions] are not completely clear, but taken together, they indicate that the interviewer's physical presence is not the important factor…. On the whole, the weight of the evidence suggests that telephone interviews yield less candid reporting of sensitive information.

CONCLUSION 8-8 The current data collection mode and methods of the National Crime Victimization Survey do not provide adequate privacy for collecting information on rape and sexual assault. This lack of privacy may be a major reason for underreporting of such incidents.

Interviewer-Respondent Interactions

The NCVS is an interviewer-administered survey. As such, the interaction between the interviewer and the respondent during the interview heavily influences the quality of survey responses.
In this section, the report looks at issues associated with interviewers, including gender, training and preparation, and monitoring.

Gender

As discussed above, the presence of an interviewer may lead to misreporting on certain sensitive questions if the respondent is reluctant to talk about socially undesirable opinions or incidents. If an interviewer is administering the survey, then the gender of the interviewer may also influence (either positively or negatively) a respondent who is asked a sensitive question. Catania (1997) found that respondents had higher item-level response rates to questions regarding same-sex sexual experiences when the interviewer was of their own gender. Another study examined whether the gender of the voice in ACASI might affect responses to a set of sensitive questions asked of young adults. The findings suggest that female interviewers may elicit more accurate reports (Dykema et al., 2012, p. 312):

[There were] higher levels of engagement in the behaviors and more consistent reporting among males when responding to a female voice, indicating that males were potentially more accurate when reporting to the female voice. Reports by females were not influenced by the voice's gender.

A standard operational practice on surveys of sexual conduct or violence has been to use female interviewers. Female interviewers were used exclusively in the National Women's Study, the National College Women Sexual Victimization Study, and the National Intimate Partner and Sexual Violence Survey (discussed in Chapter 5). The National Violence Against Women Study (discussed in Chapter 5) incorporated a test of interviewer gender, using female interviewers for female respondents and both male and female interviewers for male respondents (Tjaden and Thoennes, 2000). The NCVS uses mostly, but not exclusively, female interviewers.8 The panel agrees with this standard practice but believes that additional research is needed for definitive answers regarding the effect of an interviewer's gender separate from other factors. Survey organizations are increasingly coding the demographic characteristics of interviewers (such as gender and age) that might affect recruiting and response quality so that possible effects can be more thoroughly studied. The results from these efforts will be important for the design of all surveys on sensitive topics.

Training and Preparation

Interviewers need to receive high-quality training to reduce interviewer effects and deliver survey responses of high quality.
The Census Bureau understands this important aspect of the survey process and strives to train its field representatives appropriately for these complex surveys. However, there are two issues with the training provided to interviewers on the NCVS: the overall training effort and the rarity of the incidents of interest.

The first issue is that the overall training effort on the NCVS has been inadequate. Refresher training of interviewers on the NCVS was eliminated for a 10-year period because of budget restrictions. The agencies acknowledged the problem (U.S. Census Bureau and Bureau of Justice Statistics, 2011a, pp. 1-5):

[G]eneral performance reviews and refresher training were eliminated. So while the survey remained in the field and we were still able to generate annual crime estimates, we (Census) and the BJS had limited ability to monitor the quality of the data collected and to ensure that our field staff fully understood what was expected of them.

Fortunately, some training is being restored (Bureau of Justice Statistics, 2012a, p. 11):

Beginning in August 2011, refresher training of all field representatives (FR) was conducted using an experimental split sample cluster design. This was the first comprehensive refresher training that had been conducted since the 1990s. To maintain consistent year-to-year comparisons, Census and BJS implemented the experiment in a manner that isolated the effects of training without contaminating the annual 2011 estimates.

The second issue is that rape and sexual assault is only one type of victimization among the many on the NCVS questionnaire, and it is rarely reported. Yet questions about this topic require special sensitivity from interviewers. The NCVS refresher training for field representatives was 1.5 days long, during which the NCVS screener was discussed for only 2 hours, and those 2 hours covered not only methods of asking sensitive screener questions but also many other issues. The trainers provided a number of useful suggestions to follow when interviewing victims of sexual or other sensitive crimes (see Box 8-2). The panel applauds this refresher training, which covered many facets of the NCVS. However, the limited time devoted to asking sensitive questions, the need for privacy in asking those questions, and a fuller understanding of sexual victimization did not get the emphasis needed to ensure complete reporting.

8  The Census Bureau faces issues related to equal employment opportunities when considering hiring based on gender.
And even if adequate training could be provided, it would not be reinforced through the day-to-day survey process: the NCVS is a general-purpose criminal victimization survey, and an interviewer very infrequently gets a positive response to questions about rape and sexual assault.

CONCLUSION 8-9 The current training for National Crime Victimization Survey interviewers with regard to the subject of rape and sexual assault is insufficient to ensure complete and accurate responses. Moreover, because interviewers only infrequently encounter reports of these crimes, they do not get the opportunity to practice and reinforce the training that they do receive.
BOX 8-2
Suggestions When Interviewing Victims of Sexual or Other Sensitive Crimes Provided to National Crime Victimization Survey (NCVS) Interviewers During Refresher Training in 2011

• Be sensitive to what the respondent is telling you; however, keep the respondent on track because some respondents may have the tendency to tell you more than what is being asked.
• Be respectful and polite to victims, even to those who do not want to talk.
• Avoid unnecessary pressure. Be patient.
• Be supportive and let victims express their emotions, which may include crying or angry outbursts.
• Be careful not to appear overprotective or patronizing.
• Avoid judging victims or personally commenting on the situation.
• Remind the respondents periodically, if necessary, about the importance of their responses.
• Reassure the respondent that knowing the prevalence of the form of violence they are experiencing will be useful to expand efforts to identify ways to help victims of that type of crime and to hold perpetrators accountable.
• Supply the respondent with a copy of the NCVS-110 Fact Sheet brochure (show example), which contains several hotline numbers that they may find helpful to call if the person asks for assistance. Make sure you have an ample supply of the Fact Sheet to provide to respondents when needed.

SOURCE: U.S. Census Bureau and Bureau of Justice Statistics (2011a, pp. 2-33-2-34).

Monitoring

Monitoring of interviews is a method of ensuring quality control over the interview process, improving interviewer performance, and improving data quality. It is standard practice for central-location telephone interviews but is more limited in field interviewing and in decentralized telephone interviewing. Thissen et al. (2009, p. 2) provide an overview of the classic techniques of monitoring in field data collection and their advantages and disadvantages.
These techniques include in-person observation; post-interview discussions with interviewers; verification contact, by telephone or in-person reinterview; review of response data and timers; and tape recording during the interview.

The NCVS is collected through a combination of field and telephone interviews conducted by the field interviewers. Thus, there is currently no centralized telephoning that is monitored on a continuing basis. Previously, a periodic NCVS quality control recontact had been conducted as part of interviewer evaluations, but this process was suspended for several years (along with refresher training) because of budget constraints. In this process, the reinterviewer verified several things (U.S. Census Bureau and Bureau of Justice Statistics, 2010):

• the correct sample units were interviewed,
• the listing sheets were completed or updated properly,
• the household screens were completed or updated properly,
• all screen questions were asked and all answers recorded, and
• any noninterviews were classified accurately.

The research on interviewer monitoring came mostly from centralized telephone interviewing because field interviewing relied almost exclusively on a "verification contact" until the introduction of computer audio-recorded interviewing (CARI) in 1989.9 CARI is a laptop computer software application that unobtrusively makes digital recordings of the audio exchange between an interviewer and a respondent during interviews. The software is programmed so that individual questions or sections are automatically recorded for quality review. After the interview is completed, the audio files are downloaded and transmitted to the central program staff for coding and review.

The CARI technology not only records interviewer-respondent verbal interactions but also ensures that the description of the interview is not biased:

1. It records unobtrusively because the microphone is built into the computer;
2. The microphone is activated at the appropriate points by the computer program, not the interviewer, which not only reduces intrusiveness but also makes the recording independent of the interviewer; and
3. The digital recordings are exported as audio files of individual questions that can be sorted by question, respondent, or interviewer, which permits rapid and efficient purposive review.

In a feasibility report, Biemer et al. (2000, p. 1) identified the range of applications, including

• detecting gross departures from appropriate procedures, including interview fabrication;
• evaluating interviewer execution of interviewing guidelines, which permits corrective feedback for future interviews as well as data quality control for existing interviews;
• identifying questionnaire problems and data collection difficulties using interviewer-respondent interaction coding; and
• collecting verbatim responses to open-ended questions in an interview.

These applications include all of the goals of monitoring within a centralized telephone-interviewing facility, with the exception of immediate feedback.

The researchers (Biemer et al., 2000) found that the CARI-based approach was less expensive than other traditional approaches for verification of field interviews: 23 percent less expensive than face-to-face follow-up and 32 percent less expensive than a telephone- and postcard-based approach. However, these analyses ignored the development costs of a functional CARI system.

It is particularly noteworthy that the CARI system was piloted on an extremely sensitive survey, the National Survey of Child and Adolescent Well-Being, a panel survey of 6,700 children who are the subjects of reports of abuse and neglect. The study required both signed and audio-recorded consent to use CARI. Consent was obtained in 85 percent of the caseworker interviews, 83 percent of the caregiver interviews, and 82 percent of the child interviews.

CONCLUSION 8-10 Monitoring of interviewers is important to ensure quality and to identify areas in which an individual interviewer needs reinforcement and areas in which improved training is needed. The monitoring method used in the National Crime Victimization Survey, periodic reinterviews of selected respondents, is not adequate to ensure interviewing quality.

9  The method was developed and pioneered by RTI. It was first deployed in a national field study in the 1989 National Survey of Child and Adolescent Well-Being.
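The purposive review that CARI enables, sorting exported audio files by question, respondent, or interviewer, can be sketched in a few lines of code. The field names and identifiers below are illustrative assumptions, not the actual CARI file schema:

```python
# Illustrative sketch of organizing CARI-style audio segments for review.
# The schema (interviewer_id, respondent_id, question_id, file_path) is an
# assumption for illustration, not the actual CARI data format.
from dataclasses import dataclass
from itertools import groupby
from operator import attrgetter

@dataclass(frozen=True)
class AudioSegment:
    interviewer_id: str
    respondent_id: str
    question_id: str
    file_path: str

segments = [
    AudioSegment("fr-07", "r-101", "screener-q41", "q41_r101.wav"),
    AudioSegment("fr-02", "r-102", "screener-q41", "q41_r102.wav"),
    AudioSegment("fr-07", "r-103", "screener-q42", "q42_r103.wav"),
]

# Review every recording of one sensitive screener question across all
# interviewers, rather than listening to whole interviews end to end.
by_question = sorted(segments, key=attrgetter("question_id"))
for question_id, group in groupby(by_question, key=attrgetter("question_id")):
    print(question_id, [s.file_path for s in group])
```

Grouping on interviewer_id instead would support the interviewer-evaluation and corrective-feedback uses identified by Biemer et al. (2000).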