identify different subgroups for which the methods need to be different. For example, some households may have electronic records of nearly all expenditures, such as credit card and bank account statements; others may keep paper receipts; a third group may use a combination of methods; and a fourth group may not maintain sufficient records in any form. Among those who keep electronic records, some may even use specialized software, such as tax-related packages, that serves as a single repository of expenditure data.

Some households rely on a single person to maintain all expenditure records, while others divide this responsibility by the type of expenditure or by the person who made the expenditure. These examples certainly oversimplify what is invariably a complex and multifaceted recordkeeping phenomenon, but studies are needed to improve understanding of how households, and the individuals within them, keep expenditure records today.

Collecting data on a reduced set of 96 expenditure categories. Much of the burden in the CE surveys stems from the data requirements imposed on them. A study is needed to investigate designs that minimize the number of questions and thereby reduce respondent burden while still acquiring accurate data. Both Design B and Design C require this research. The instrument can be reduced in a number of ways, but at a minimum, an evaluation is needed of the impact of collecting 96 categories of expenditures instead of the more detailed 211 expenditure categories now collected. A preliminary evaluation of the impact on the CPI, for example, can be conducted using extant data.

Use of incentives. The U.S. population has become more reluctant to participate in surveys (e.g., Groves and Couper, 1998; Stussman, Dahlhamer, and Simile, 2005), and incentives can help mitigate nonresponse. The key question, however, is how incentives, if used, are incorporated into the survey design. The panel did not venture to recommend a particular design, as this choice can only be informed through experimentation. Aspects that may warrant experimental manipulation include the structure (e.g., prepaid versus promised, household versus individual), timing (e.g., before, during, or after completion of the supported journal), form (e.g., cash or noncash, and if cash, whether it is delivered by electronic transfer), criteria for payment (e.g., a certain level of supported journal completeness), amounts, and the potential use of differential incentives (e.g., for groups with lower expected compliance or higher burden, such as households with more members). More detail on this topic is covered earlier in this chapter under “Guidelines for the Use of Incentives.”


