National Center for Health Statistics (NCHS) research supports an active survey management program designed to reduce nonresponse. As Jennifer Madans of NCHS reported at the panel's workshop, National Health Interview Survey (NHIS) research focuses on nonresponse, with much of the work drawing on paradata collected as part of the survey. NCHS uses a contact history instrument, audit trails of items and interview times from the Blaise survey management platform, and analysis of the front and back sections of the survey instrument. Issues under investigation include the effects of shortening the field period, the level of interviewer effort, and the trade-offs between response rates and data quality. The research has found that the loss of high-effort households had only minor effects on estimates, while respondent reluctance at first contact degraded data quality. Interviewer studies have found that pressure to obtain high response rates can be counterproductive, often leading to shortcuts and violations of procedures. These investigations have helped develop new indicators that track interview performance in terms of time, item nonresponse, and mode.
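Indicators of the kind described above can be computed directly from paradata. The following sketch, with illustrative field names rather than NCHS's actual paradata schema, aggregates per-interview records into two simple per-interviewer indicators: mean interview time and item-nonresponse rate.

```python
# Hypothetical interview-performance indicators computed from paradata.
# Field names (interviewer, duration_min, items_asked, items_missing)
# are illustrative, not an actual NCHS schema.
from collections import defaultdict

def performance_indicators(records):
    """Aggregate per-interview paradata records into per-interviewer indicators."""
    totals = defaultdict(lambda: {"minutes": 0.0, "asked": 0, "missing": 0, "n": 0})
    for r in records:
        t = totals[r["interviewer"]]
        t["minutes"] += r["duration_min"]
        t["asked"] += r["items_asked"]
        t["missing"] += r["items_missing"]
        t["n"] += 1
    return {
        iv: {
            "mean_minutes": t["minutes"] / t["n"],
            "item_nonresponse_rate": t["missing"] / t["asked"],
        }
        for iv, t in totals.items()
    }

records = [
    {"interviewer": "A", "duration_min": 30.0, "items_asked": 100, "items_missing": 2},
    {"interviewer": "A", "duration_min": 20.0, "items_asked": 100, "items_missing": 6},
    {"interviewer": "B", "duration_min": 45.0, "items_asked": 100, "items_missing": 1},
]
print(performance_indicators(records))
```

Unusually short mean times or elevated item-nonresponse rates would flag interviewers for review, consistent with the finding that response-rate pressure can lead to shortcuts.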

The National Survey of Family Growth has focused on paradata-driven survey management. The survey collects paradata on what is happening with each individual case. These paradata are transmitted every night, analyzed the following day, and used to manage the survey. The paradata measures include interviewer productivity, costs, and response rates by subgroup. Managers use them to focus effort on sample nonrespondents, to deploy different procedures (including increased incentives), and to identify the cases to work for the remainder of the field period.
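The nightly cycle described above amounts to recomputing subgroup response rates from case-level paradata and prioritizing the open cases in lagging subgroups. A minimal sketch, with hypothetical field names and an assumed 50 percent target rate, follows.

```python
# Illustrative paradata-driven case management: compute response rates by
# subgroup from one night's case records, then list the open nonrespondent
# cases in subgroups lagging a target rate. Field names (case_id, subgroup,
# status) and the 0.5 threshold are assumptions for illustration.
from collections import Counter

def response_rates_by_subgroup(cases):
    complete = Counter(c["subgroup"] for c in cases if c["status"] == "complete")
    total = Counter(c["subgroup"] for c in cases)
    return {g: complete[g] / total[g] for g in total}

def cases_to_work(cases, rates, threshold=0.5):
    """Open cases in subgroups whose response rate lags the threshold."""
    return [c["case_id"] for c in cases
            if c["status"] == "open" and rates[c["subgroup"]] < threshold]

cases = [
    {"case_id": 1, "subgroup": "urban", "status": "complete"},
    {"case_id": 2, "subgroup": "urban", "status": "open"},
    {"case_id": 3, "subgroup": "rural", "status": "complete"},
    {"case_id": 4, "subgroup": "rural", "status": "complete"},
    {"case_id": 5, "subgroup": "urban", "status": "open"},
]
rates = response_rates_by_subgroup(cases)
print(rates)
print(cases_to_work(cases, rates))
```

In practice the prioritized list would also feed decisions such as which cases receive increased incentives.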

To measure content effects, the National Immunization Survey (NIS) has run controlled experiments along several lines of inquiry. In one experiment, NIS varied such tools as an advance letter, the screener introduction, answering-machine messages, and caller ID (known name versus 800 number). Other experiments involved scheduling call attempts by type of respondent and nonrespondent; incentives (prepaid plus promised) for refusals and partial interviews; propensity modeling for weighting adjustments; dual-frame sampling (landline plus cell phone RDD samples) and oversampling using targeted lists; and benchmarking results against the NHIS. Findings thus far include that the response rate differed when the content and wording of the screener introduction were varied; that advance letters, improved for content, readability, contact and callback information, and website information, increased participation; a legitimate
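The propensity modeling mentioned above can be illustrated with a simplified weighting-class version of the idea: estimate each case's response propensity within cells and inflate respondents' base weights by the inverse of the estimated propensity. A production adjustment would typically fit a response-propensity model (e.g., logistic regression) with many covariates; the cell names and weights below are illustrative assumptions.

```python
# Simplified nonresponse weighting adjustment in the spirit of propensity
# modeling: propensity is estimated per weighting cell as the responding
# share of base weight, and respondents' weights are divided by it.
# Cell labels and base weights are hypothetical.
from collections import defaultdict

def adjust_weights(cases):
    """cases: list of dicts with 'cell', 'base_weight', 'responded' (bool)."""
    wt_total = defaultdict(float)   # total base weight per cell
    wt_resp = defaultdict(float)    # responding base weight per cell
    for c in cases:
        wt_total[c["cell"]] += c["base_weight"]
        if c["responded"]:
            wt_resp[c["cell"]] += c["base_weight"]
    # Estimated response propensity per cell; adjusted weight = base / propensity.
    propensity = {cell: wt_resp[cell] / wt_total[cell] for cell in wt_total}
    return {i: c["base_weight"] / propensity[c["cell"]]
            for i, c in enumerate(cases) if c["responded"]}

cases = [
    {"cell": "landline", "base_weight": 100.0, "responded": True},
    {"cell": "landline", "base_weight": 100.0, "responded": False},
    {"cell": "cell", "base_weight": 150.0, "responded": True},
    {"cell": "cell", "base_weight": 150.0, "responded": True},
]
print(adjust_weights(cases))
```

Note that the adjustment preserves the total base weight within each cell, which is what keeps weighted estimates representative of the full sample.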

Copyright © National Academy of Sciences. All rights reserved.