

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




4
The Panel’s Investigation into the Issues with the CE

In 2009, the Bureau of Labor Statistics (BLS) initiated a systematic, comprehensive study of the challenges faced by the Consumer Expenditure Surveys (CE) with the goal of redesigning the existing surveys to reduce measurement error. BLS states that the mission of this venture, known as the Gemini Project, is

to redesign the Consumer Expenditure surveys (CE) to improve data quality through a verifiable reduction in measurement error, particularly error caused by underreporting. The effort to reduce measurement error will combat further declines in response rates by balancing any expected benefits of survey design changes against any potential negative effects on response rates. Any improvements introduced as part of the Gemini Project should not increase budgetary burden, but instead, should remain budget neutral. (Bureau of Labor Statistics, 2011e, p. 1)

Since the beginning of the Gemini Project, BLS has undertaken a number of information-gathering meetings, conference sessions, forums, and workshops to aid in its mission. All of these have provided valuable information for the Panel on Redesigning the BLS Consumer Expenditure Surveys in its current task, and many of the papers presented at them are cited in this report. These events included the National Bureau of Economic Research’s Conference on Improving Consumption Measurement (July 2009); Survey Redesign Panel Discussion, cosponsored by the Washington Chapter of the American Association for Public Opinion Research (DC-AAPOR) and the Washington Statistical Society (January 2010); Data Capture Technology Forum (March 2010); AAPOR Panel on Respondent Record Use (May

2010); Data User Needs Forum (June 2010); and CE Methods Workshop (December 2010). More information about these events, plus copies of papers presented at them, can be found on the BLS website (see http://www.bls.gov/cex/geminimaterials.htm). Additionally, BLS has conducted internal research in support of the Gemini mission and has contracted targeted research from the private sector. The panel commends BLS on its multiyear, systematic review of the methodology used in the CE.

Building on the work of the Gemini Project, the panel investigated the opportunities and drawbacks related to the CE. As described in this chapter, its additional investigation included feedback from CE data users, panel members’ reactions when they assumed the role of survey respondents, and a workshop to learn more about other large-scale household surveys. Redesign options developed by two outside groups in response to a Request for Proposal also formed an important part of the panel’s investigations, and the chapter concludes with some of the main points and discussions elicited by these two options.

FEEDBACK FROM DATA USERS

The panel was diligent in reaching out to data users and trying to understand the many uses of the CE. Many of those uses are outlined in Chapter 2, including input into calculation of the Consumer Price Index (CPI), development of government programs, and use as the basis for research and analysis. Several panel members are themselves regular users of the CE microdata. The panel reviewed a broad set of published research that used the CE as a source of information. As noted above, members studied the papers from the BLS 2010 Data User Needs Forum. They also attended conferences held by the National Bureau of Economic Research and held a session with microdata users at the 2011 CE Microdata Users’ Conference. Finally, the panel spoke one-on-one with many users of the CE data.
The panel studied the complexities of the CPI program and how the CE supports those important indices. Considerable detail on this topic is provided in Chapter 2, in the section “CE Data Provide Critical Input for Calculating the Consumer Price Index.” From its investigation, the panel made the following two conclusions.

Conclusion 4-1: The CPI is a critical program for BLS and the nation. This program requires an extensive amount of detail on expenditures, at both the geographic and product level, in order to create its various indices. The CPI is the current driver for the CE program with regard to the level of detail it collects. The CPI uses over 800 different expenditure items to create budget shares. The current CE supplies data for

many of these budget shares. However, even with the level of detail that it currently collects, the CE cannot supply all of the budget shares used by the CPI. There are other data sources from which the CPI currently generates budget shares.

Conclusion 4-2: The CPI does not utilize the panel nature of the current CE. Instead, the national and regional estimates employed by the CE assume independence of households between quarters on the Interview survey, and independence between weeks on the Diary survey.

As discussed in the Chapter 2 section “The CE Provides Data Critical in Administering Government Programs,” the CE is used by a number of federal agencies to administer portions of their programs. To learn more details about this particular use of the CE, the panel held in-depth conversations with staff at these agencies. A summary of those conversations appears in Appendix C. From this investigation, the panel makes the following conclusion.

Conclusion 4-3: The administration of some federal programs depends on specific details collected from the CE. There are currently no other available sources of consistent data across years for some of these programs.

A third large group of users of the CE data are economic researchers and policy analysts from academic institutions, government agencies, and private organizations. These users work with tabular estimates produced by the BLS, and increasingly with microdata files from the CE. The panel talked with a number of these data users and researched the types of questions that their analyses addressed and the characteristics of the CE that were important for those analyses. Many examples are provided in the Chapter 2 section, “CE Data: A Cornerstone for Policy Analysis and Economic Research.” Much of this work is geared to understanding household behavior and how households adjust their consumption in response to changes in circumstances.
These changes may be brought about by personal events such as a change in income, marriage, loss of a job, retirement, the birth of a child, or the onset of a disability. Government program changes (such as tax reform, adjustments in the minimum wage, or health care legislation) can also impact household behavior. For data to be useful in this endeavor, users say it is necessary to have panel data with at least two observations. Many analysts indicate the strong advantage of having a third observation.

A related issue is the length of each panel period. Data collected over a short period, such as in the current two one-week Diary surveys, are able to answer questions related to how households respond to events that happen relatively frequently, such as receipt of monthly Social Security benefits (e.g., Stephens, 2003). However, a wide range of questions requires examining the same household both before and after a less frequent event such as a tax rebate, a job loss, or a divorce. These questions are more difficult to address with data collected over a short time period unless the sample size is rather large.

Regardless of the period over which expenditure is measured, an important complement is relevant household information over the same interval. In order to examine whether changes in household circumstances lead to changes in household consumption, these circumstances must be measured during the same period. The principal variables of interest are income, employment, retirement, disability, and marital status.

When panel data have been lacking, researchers have been able to create panels by using “synthetic cohorts.” The idea behind synthetic cohorts is that, in place of following the behavior of the same individuals over time, researchers can create a panel by modeling individual household activity based on data from similar groups of households. Using these synthetic cohorts, researchers can examine the relationship between changes over time. While synthetic cohort data are more difficult to work with, they may prove useful for answering some questions. However, for a number of policy questions, synthetic cohort data do not provide a useful tool. Thus, the following conclusion is made regarding use of the CE for research and analysis purposes.

Conclusion 4-4: Economic researchers and policy analysts generally do not use CE expenditure data at the same level of detail required by the CPI. More aggregate measures of expenditures suffice for much of their work.
However, many do make use of two current features of the CE microdata: an overall picture of expenditures, income, and household demographics at the individual household level; and a panel component with data collection at two or more points in time.

PANELISTS’ INSIGHT AS SURVEY RESPONDENTS

Panel members wanted to gain firsthand insight into the CE from the viewpoint of a respondent, so approximately three-quarters of panel members were interviewed by a Census field representative. Most experienced the Interview survey, one kept the Diary, and several did both. Box 4-1 provides some reactions of panel members to this experience. During the process, panel members asked their own questions of the field representatives. Thus, the interview experience for the panel was partly trying to recall and answer specific questions about their own expenditures, and partly trying to understand the overall nature of the interviews as experienced by others. The field representatives made a number of comments to panel members about their “typical” respondents and what they considered normal respondent behavior. The panel believes that this entire process brought realism into its discussion of the cognitive issues and potential solutions (Committee on National Statistics Panel on Redesigning the Consumer Expenditure Survey, 2011).

BOX 4-1
Reactions of Panel Members Following Their Interviews with Field Representatives (FRs)

“The FR said I was the first respondent to EVER consult paper and electronic records extensively.”

“The FR interviewed me extremely quickly—fast talker. This seems to be the solution to respondent burden: get through as fast as possible.”

“I got a strong sense of how easy it becomes to say ‘no’ to a category simply because saying ‘yes’ so clearly leads to more trouble.”

“After the interview, the FR told me about the suspicion of government and concerns about intrusiveness that the FR regularly encounters, and it is much more intense and extreme than I had expected.”

“The diary appears to have some very significant strengths compared to quarterly recall. I did not see the immediate problems of being unable to respond to questions, as I experienced when doing the CE quarterly interview. This is a much easier task, even though at first blush it seems like keeping a diary for two weeks was going to be extraordinarily difficult.”

SOURCE: Committee on National Statistics Panel on Redesigning the Consumer Expenditure Survey (2011).

HOUSEHOLD SURVEY PRODUCERS WORKSHOP: DESCRIPTION AND INSIGHTS

Many of the problems and issues facing the CE are also faced by other large household survey programs, and the panel wanted to leverage the work done on these surveys toward solutions for the CE.
In this endeavor, the panel planned and held a Household Survey Producers Workshop in June 2011 in Washington, DC. (The agenda for the workshop appears in Appendix E.) The panel would like to extend its appreciation to the presenters at this workshop for the insights they provided.

BOX 4-2
Sessions at the Household Survey Producers Workshop

Session 1: Alternative Ways of Measuring Consumer Expenditures—International Experiences
Session 2: Designs That Add Flexibility in Data Collection Mode
Session 3: Designs That Effectively Mix Data from Multiple Surveys and/or External/Administrative Data to Produce Estimates
Session 4: Designs That Effectively Mix Global and Detail Information to Reduce Burden and Measurement Error
Session 5: Designs That Use “Event History” Methodology to Improve Recall and Reduce Measurement Error in Recall Surveys
Session 6: Diary Surveys That Effectively Utilize Technology to Facilitate Recordkeeping or Recall

NOTE: See Appendix E for the full agenda of the workshop.

The program for this workshop was built around six topics (see Box 4-2), each of which was specific enough to inform the panel’s redesign deliberations yet broad enough to allow different perspectives on the topic to be presented. After the presentations on a topic, one member of the panel discussed the insights that these presentations held for the CE redesign. A summary of the main points raised in the six sessions follows.

Session 1: Alternative Ways of Measuring Consumer Expenditures—International Experiences

The purpose of this session was to have representatives from other countries talk about how they collect consumer expenditures, the issues they face, and their approach to these issues. The panel was looking for differences and similarities that might inform redesign options for the U.S. CE.

Dubreuil et al. (2011) discussed how Statistics Canada redesigned its consumer expenditure survey. The new design of Canada’s Survey of Household Spending looks similar to the current CE in the United States. It uses a combination of a recall interview and a 14-day diary for each selected household, with varying recall periods for different expense items.
The previous Canadian design incorporated a “balance edit,” a feature that a number of users of the CE would like to see incorporated in the CE redesign. The Canadian redesign no longer includes this feature.

Horsfield (2011) discussed the UK Living Costs and Food Survey. It is

a relatively short household survey to collect regular expenditures such as rent and mortgage payments, along with retrospective information on certain large, infrequent expenditures such as those on vehicles. The program predominantly uses a Diary survey, as each individual aged 16 and over is asked to keep diary records of daily expenditures for two weeks. Children (aged 7–15) complete a simplified diary. Household members receive incentives for completing the diary: 10 pounds ($15.68) per adult, 5 pounds ($7.84) per child.

Borg (2011) discussed consumer expenditure surveys in Europe and the European Union’s efforts to harmonize survey results. The EU countries all have their own expenditure surveys carried out under the responsibility of their national statistical offices. These surveys are generally periodic rather than annual. The primary purpose of these surveys is to produce the budget shares for the national consumer price indices, although there has been increasing use of the information at both the national and EU levels. There remain some comparability issues among these surveys.

Session 2: Designs That Add Flexibility in Data Collection Mode

One of the primary reasons for the CE redesign is the need to update data collection strategies to create greater flexibility in the data collection mode. The Interview survey is conducted in person, with a fallback to telephone interviewing when a personal visit is not feasible. The Diary survey is dropped off and picked up in person, and the diary information is collected on paper forms. This session provided examples of how other surveys are incorporating response flexibility or newer data collection methods.

Smyth presented results from Olson, Smyth, and Wood (2011), an experiment within the ongoing Nebraska Annual Social Indicators Survey that allows respondents to choose their mode preference.
The experiment then used each respondent’s preferred mode of data collection and tested whether this treatment made a difference in response. In this limited experiment, they found that response rates were higher for those surveyed in their preferred mode. They also found that Web survey response rates were lower than those with mail and phone contacts across all preference groups. However, they found that results changed within a mixed-mode framework.

The Business Research & Development and Innovation Survey (BRDIS), conducted by the Census Bureau for the National Science Foundation’s National Center for Science and Engineering Statistics, is the nation’s primary source of information on business R&D expenditures and the workforce. Hough (2011) reported that, unlike its predecessor, which was sent to a single respondent within a company, the new BRDIS questionnaire is structured to allow and encourage different experts within a single business to provide responses in their areas of expertise. There are both paper and electronic versions of the questionnaires. The program has also developed an online toolkit to assist business respondents that includes spreadsheets, fillable PDFs, and personalized support by an account manager.

Clearly, establishment surveys differ from household surveys in many ways, but there are similarities from which to extract ideas, such as multiple mode options and a toolkit for respondents. Different households, and members of the same household, may have different comfort levels with different collection modes. A key point from this presentation is that “one size” does not fit all respondents. The BRDIS recognizes that point up front and designs for it in its methodology. Another point is the toolkit to further assist respondents.

Wine and Riccobono (2011) discussed the National Postsecondary Student Aid Study (NPSAS), a survey conducted by RTI International for the National Center for Education Statistics that mixes multiple sources of data and data collection modes with incentives to obtain and keep student respondents.

NPSAS data come from multiple sources, including institutional records, government databases, and student interviews. Detailed data on participation in student financial aid programs are extracted from institutional records. Data about family circumstances, demographics, education and work experiences, and student expectations are collected from students through a web-based multimode interview (self-administered and computer-assisted telephone interviews [CATI]). (National Center for Education Statistics, 2012)

A tailored incentive program is designed into the process to encourage early response.

Session 3: Designs That Effectively Mix Data from Multiple Surveys and/or External/Administrative Data to Produce Estimates

Some of the information collected on the CE may be available in administrative records or collected on other government surveys.
This session highlighted surveys that, while collecting large quantities of information themselves, also utilize administrative records and/or combine data from other survey data collections to reduce the overall burden of the survey or to improve the overall quality of data.

Machlin (2011) described how the Medical Expenditure Panel Survey (MEPS) matches survey data to health records, combining them with information collected from household members and their medical providers. Upon completion of the household interview, and after obtaining permission from the household survey respondents, a sample of medical providers is contacted by telephone to obtain information that household respondents cannot accurately provide. This part of the MEPS is called the Medical

Provider Component (MPC), and information is collected on dates of visit, diagnosis and procedure codes, charges, and payments. The Pharmacy Component (PC), a subcomponent of the MPC, collects drug detail information, including National Drug Code (NDC) and medicine names, as well as date(s) prescriptions are filled, sources, and amounts of payment (Agency for Healthcare Research and Quality, 2012).

O’Brien (2011) discussed the Residential Energy Consumption Survey (RECS), which collects a multitude of information on houses, appliances, and home energy usage. It collects utility records from energy suppliers in lieu of self-reports from respondents. As part of this process, the interviewer asks household respondents to name their energy suppliers and to produce a bill from each supplier. The interviewer uses a portable scanner to scan in the bills. The Energy Information Administration then contacts the energy suppliers to obtain records for the sampled household unit for the previous year (U.S. Department of Energy, 2011).

Schenker and Parsons (2011) discussed combining data from multiple surveys to improve quality and reduce burden within the survey program of the National Center for Health Statistics (NCHS).
They provided four examples:

• Combining information from the National Health Interview Survey (NHIS) and the National Nursing Home Survey to obtain more comprehensive estimates of the prevalence of chronic conditions for the elderly;
• Using information from the National Health and Nutrition Examination Survey (NHANES) to improve analyses of self-reported data on the NHIS;
• Combining information from the Behavioral Risk Factor Surveillance System with the NHIS to enhance small-area estimation; and
• Creating links between various NCHS surveys and administrative data sources such as air quality data available from the Environmental Protection Agency, death certificate data from the National Death Index, Medicare enrollment and claims data from the Centers for Medicare & Medicaid Services, and benefit history data from the Social Security Administration.

Session 4: Designs That Effectively Mix Global and Detail Information to Reduce Burden and Measurement Error

This session highlighted surveys that, while collecting large quantities of information, do so using design strategies and questionnaire modules that avoid asking every respondent for all details on each contact.

Aune (2011) discussed the Agricultural Resource Management Survey,

an expense and income survey of farming establishments conducted by the National Agricultural Statistics Service. It is an annual survey that collects detailed information related to the farming enterprise and, to a lesser extent, to the farm household. This survey has multiple modules or versions, with sample units assigned to a specific version during the selection process. Most versions are designed for personal enumeration, but one is designed for mail/Web collection. For a given expense item (such as fuel expenses), some versions will ask only the global expense item (total spent on fuel of all kinds) and others will ask for a detailed breakout of that expense item (amount spent on gasoline, diesel, propane, etc.). Regardless of the version and mix of global/detail questions, all data are combined in summary estimates and contribute to the state, regional, and national estimates.

Fields (2011a) discussed the current structure of the Survey of Income and Program Participation (SIPP) and its use of both “core” and “topical” questionnaire items. The SIPP follows households for multiple waves. Core questions, such as the global item “total income,” are asked in all waves. Topical questions are those that are not repeated in each wave. Topical modules are designed to gather specific information on a wide variety of subjects; some cover items such as assets and liabilities, real estate property, and selected financial assets. In some instances, the topical questions are intermixed with core questions in the interview to make the questionnaire flow more smoothly.

Gentleman (2011) discussed two alternatives for asking questions about the entire family in the National Health Interview Survey. The first alternative asks a global question: “does anyone in the family. . . .” The second goes through the family roster and asks individual questions for each family member.
The NHIS is also used as a screening vehicle for follow-on surveys, with many detailed questions saved for those follow-on surveys. One result from their experiments on screening questions showed that respondents gave fewer “yes” answers to filter questions as they learned that such answers led to additional questions.

Session 5: Designs That Use “Event History” Methodology to Improve Recall and Reduce Measurement Error in Recall Surveys

This session highlighted surveys that utilize “event history” methodology to improve the quality of recalled information. The Panel Study of Income Dynamics (PSID) was the first major survey to implement “event history” methodology to improve the ability of respondents to recall information. Stafford (Beaule and Stafford, 2011) discussed the implementation of this methodology in the PSID, which has been a prototype for other surveys. The PSID team conducted a number of methodological studies as they developed this methodology.

Fields (2011b) discussed a newly redesigned SIPP that uses event history methodology, drawing on the experiences of the PSID. The SIPP staff believe they may be able to use a one-year recall period as effectively and accurately with this new methodology as the current design, which uses a four-month recall. The new design is scheduled to be operational in 2014. This presentation discussed the implementation of “event history” methodology and presented what has been learned so far from the pilot program.

Session 6: Diary Surveys That Effectively Utilize Technology to Facilitate Recordkeeping or Recall

Newer technology, such as the Web, smartphones, and portable scanners, has opened possibilities for diary surveys. This session highlighted surveys that utilize this newer technology to field innovative diary-type surveys.

The National Household Food Acquisition and Purchase Survey (FoodAPS) is a new pilot survey sponsored by the U.S. Department of Agriculture designed with an innovative approach to a food diary. Cole (2011) discussed the survey, which collects information on food sources, choices, quantities, prices, timing of acquisition, and nutrient characteristics for all at-home and away-from-home foods and beverages. It also collects household information that may influence food acquisition behaviors. The pilot uses color-coded booklets, portable scanners for receipts, regular telephone contact to encourage diary-keeping, and incentives as part of the data collection process (U.S. Department of Agriculture, 2011).

Kizakevich (2011) discussed personal diary and survey methodologies for health and environmental data collection used by RTI International. Among these examples were:

• PFILES, a real-time exposure-related diary of product use and dietary consumption in the context of activity, location, and the environment.
It uses Pocket PCs with headsets for use by respondents, who record survey responses and even take pictures of their environment;
• Personal Health Monitor, for use by patients suffering from post-traumatic stress disorder and mild traumatic brain injuries, to help clinicians monitor patients’ status while observing symptoms and medication usage within the context of daily activities and environmental factors; and
• BreathEasy, an Android app that allows a daily assessment of asthma triggers, health, and ventilation.

Bailey (2011) discussed the Nielsen Life360 Program, which uses a “digital ethnography” approach to measure attitudes, preferences, and behaviors of the targeted population using mobile phone surveys, photography, Internet-based journals, video cameras, and Web surveys. A specially equipped smartphone prompts respondents to complete a short survey on an hourly basis, in addition to capturing an image using the built-in camera as a picture description of their surroundings and activities in real time.

Summary of the Workshop

The panel found the workshop presentations to be highly informative and to provide important input into panel deliberations. A number of key points emerged from the prepared remarks that discussants delivered during the workshop, as amplified in subsequent discussion among panelists.

First, the international comparisons demonstrate that the concerns about data quality and burden that have led to the need for a redesign of the CE are not unique to U.S. data collection efforts, although the size of and variability among the U.S. population present particular challenges. The alternative methods that the panel observed from other countries made clear that a bounding interview is not a universal method, and that it is plausible to rethink this aspect of CE administration. It was also clear to the panel that, although the methods and approaches from other nations have many strengths, they also have their own challenges, and simple wholesale adoption of those methods is unlikely to be a panacea for improving the CE.

Second, adding new modes of data collection needs to be done thoughtfully, attending carefully to whether adding new modes or providing respondents with a choice of mode increases data quality, reduces respondent burden, or reduces nonresponse sufficiently to be worth the design, operational, and analytic costs.
As the Smyth presentation in Session 2 illustrated, the scientific community does not yet fully understand why particular modes work for different respondents.

Third, while it is quite attractive to consider replacing or supplementing respondent-reported data with data from other sources (administrative records, data from other surveys) to reduce respondent burden and administrative costs, this is not as straightforward an enterprise as it might seem. The hurdles are notable enough—from mode and questionnaire differences to sampling and weighting incompatibilities, privacy and confidentiality issues, linkage difficulties, increased agency effort, data sharing difficulties, and lack of knowledge of costs—that it does not seem plausible to the panel that alternate sources could suffice in the short term. There is also considerable concern about whether external data would be consistently available over time.

Fourth, the panel was impressed by efforts in other U.S. surveys to

streamline data collection and rethink what kinds of general and specific information need to be asked of respondents. Although the immediate applications to the CE of the particular approaches described at the workshop are not entirely clear, and although much remains to be understood about the relationship between survey length and respondent burden, the panel’s subsequent deliberations and proposals were influenced by such efforts.

Fifth, whether or not event history methods are the only or best way to stimulate all respondents’ recall, the panel took note of the insight that emerges from studies of alternative interviewing methods. A redesigned CE needs to go as far as it can to accommodate respondents’ natural ways of thinking about and recalling their expenditures, rather than asking respondents to conceive of their expenditures from the researcher’s perspective. More broadly, assuming that respondents can recall purchases accurately without consulting records is problematic, and a redesigned CE needs to promote the use of records far more than current methods do.

Finally, the panel took very serious note of the opportunities for using new technologies to facilitate more direct and in-the-moment self-administered reporting of expenditures, as well as for passive measurement of expenditures. It will be important for a CE redesign to make as much use of these opportunities as feasible, and to start a new forward-thinking mode of research and production that continually assesses the changing technological landscape and prepares as much as possible for changes before they happen.

REDESIGN OPTIONS WORKSHOP: DESCRIPTION AND INSIGHTS

In order to elicit a broader perspective on possible solutions to the CE’s problems, the panel sought formal input from organizations with experience in designing complex data collection methods.
It is in this context that the panel issued a Request for Proposal (RFP) and competitively awarded two subcontracts, one to Westat (project leader: David Cantor) and the second to a consortium from the University of Wisconsin–Milwaukee, University of Nebraska–Lincoln, and Abt-SRBI (Nancy Mathiowetz, project leader; Kristen Olson; and Courtney Kennedy). The Statement of Work (see Appendix D) required the subcontractors to produce a comprehensive proposal for a survey design, and/or other data acquisition process, that collects the data required for the primary uses of the current CE while addressing the following issues:

• Underreporting of expenditures
• Fundamental changes in the social environment for collection of survey data
• Fundamental changes in the retail environment (e.g., online spending, automatic payments)
• The potential availability of large amounts of expenditure data from a relatively small number of intermediaries such as credit card companies
• Declining response rates at the unit, wave, and item level

The full reports from Mathiowetz, Olson, and Kennedy (2011b) and Westat (2011c) are available online, and the panel summarizes them in this chapter. The panel would like to commend both subcontractors on the reports they submitted. Both designs were innovative and well thought out. The time frame for completing this contract was very short, and both groups met the challenge. Their work provided very valuable input into the panel's deliberations, from specific design options and uses of technology to reviews of the relevant literature. The panel used their research and ideas extensively.

The panel hosted a Redesign Options Workshop on October 26, 2011, and an informal roundtable on October 27 to facilitate a public discussion of the two proposals and their relative merits in regard to the current CE. An agenda for the workshop is in Appendix F. Kulka (2011) discussed both reports with a focus on the cognitive issues related to the CE, and Bowie (2011) talked about issues relative to implementing major changes in a large ongoing survey. Data users also addressed the proposed redesigns from the perspective of their use of the CE data.

A number of important insights arose from the discussion of these proposals. One concerned the amount of detail that is required for the CE. A strong opinion was offered that one cannot collect that quantity of detail without a great deal of measurement error. What constitutes a tolerable measurement error must first be made clear, followed by a move back to collecting data at a more aggregate level consistent with the prescribed level of quality.
Another important round of discussion concerned the amount of additional research that would be needed to be ready to field a newly redesigned CE survey. If the redesign includes fairly major changes, as did the two proposals offered at the workshop, then a significant amount of targeted research will lie ahead.

Most important, these discussions led BLS senior management to modify their original charge to the panel. In this modified charge (see Appendix B), the panel is asked to view the Consumer Expenditure Survey (CE) Data Requirements (Henderson et al., 2011) as the mandatory requirements for the survey. The CPI data requirements document (Casey, 2010) was no longer a part of the mandatory requirements that the redesign would need to meet.

Redesign Proposal: Westat

Westat's proposed redesign (Westat, 2011b,c) focuses on three interrelated goals: (1) reducing respondent burden, (2) incorporating administrative and personal record information, and (3) improving self-report methodology. It calls for greater reliance on records and less reliance on respondent recall. Key features of this proposal and a link to the full report are provided in Box 4-3.

The Westat proposal continues from the base of separate diary and interview surveys, but implemented differently than in the current CE. It introduces the concept of a "data repository" and a separate Administrative Record Survey to obtain certain records directly from retailers, utilities, and mortgage companies. The authors discussed their deliberations concerning access to external data:

Data obtained directly from retailers are likely to be more accurate than respondent-provided data. Other federal surveys, such as the National Immunization Survey and the Residential Energy Consumption Survey, have employed administrative data to supplement and improve the quality of data reported by respondents. Conceivably, CE respondents could provide their loyalty card numbers to interviewers, who would then ask the retailers to provide the purchasing histories for those loyalty cards. This method would not be perfect; a consumer may sometimes forget to give a loyalty card to the cashier or may lend the card to friends. Moreover, retailers do not routinely release purchasing histories.

BOX 4-3
Key Features of the Westat Proposal

• Separate diary and interview surveys but implemented differently than the current CE.
• Multiple diary-keepers within a household.
• Data repository into which respondents can upload scanned receipts and records.
• Data electronically extracted from receipts and records, and a Web survey electronically generated to request missing information.
• Two recall interview surveys, one year apart. Variable recall periods used.
• Respondents contacted three months before recall interview and encouraged to keep and scan receipts during the three-month period.
• Consent requested to obtain expenditure records directly from retailers, utilities, and mortgage companies. A separate administrative records survey to obtain those records.

NOTE: Link to full report: http://www.bls.gov/cex/redwrkshp_pap_westatrecommend.pdf and http://www.bls.gov/cex/redwrkshp_app_westatrecommend.pdf.

The BLS might explore the feasibility of obtaining purchasing history data by contacting large retailers with loyalty card programs. Expenditure data is also potentially available from utility companies, rental agents, and lenders. (Westat, 2011c, p. xiii)

For the Diary survey, each person age 14 and over in a sampled household would be asked to report expenditure data for 14 days. Having multiple respondents minimizes concerns about proxy reporting. The respondents are given a variety of reporting options: they could use the current paper diary forms, mail in their receipts and records, or report data electronically. All respondents are asked to save and then supply receipts.

A key component of the redesigned Diary survey is a "data repository" into which respondents upload various types of expense records. The repository system would extract purchase data from the uploaded records/receipts and generate a Web survey that would ask the respondent to supply any remaining information that the CE program needs about those purchases. Respondents who choose to report their data electronically would be given a portable scanner. Using specially designed software, they would e-mail files of their scanned receipts and other records of purchases to the data repository. Respondents would also be asked to download data files from various financial accounts and e-mail these files to the data repository. Respondents could instead opt to report their data by mailing in their receipts and financial statements, and staff would scan these receipts into the data repository. The authors discussed in their report the potential of asking respondents to supply financial records:

Consumers today commonly make purchases using modes that leave an electronic record.
When an electronic record exists, respondents potentially could provide the expenditure data by retrieving information about the purchase from a database, or by printing out a record of the purchase, rather than by trying to remember the details of the purchase or by finding a receipt. For example, soon after a consumer makes a purchase using a credit card, debit card, check, electronic fund transfer, or PayPal, a record of the transaction appears in a file that can be downloaded from the website of a financial institution. When a consumer makes an online purchase, the vendor typically sends a confirmation email, or provides a confirmation page, that the consumer can print out. The CE program does not currently ask respondents to provide these electronic records of expenditures. Their potential role in the CE data collection process deserves attention. These records cannot provide all of the data required by the CE program, however. They potentially offer a way to help respondents remember expenditures without a great deal of effort. (Westat, 2011c, p. 5)

Respondents would also be asked to provide consent to collect their purchasing history data directly from retailers. The authors discuss this recommendation, saying:

If CE respondents provided their loyalty card numbers, and retailers were willing to release purchasing data, the CE program would have access to objective information about the respondents' expenditures. Of course, this idea has some drawbacks. Consumers sometimes forget to provide their loyalty card to the cashier when they make a purchase. Some consumers may lend their loyalty cards to friends. Also, most retailers, including Walmart, have no loyalty card programs. (Westat, 2011c, p. 6)

A field representative would monitor the respondent's reporting activity, increasing contact and assistance to those not reporting regularly. A telephone or personal visit to the household would be scheduled after 7 and 14 days: a telephone interview for those who have been providing the information on a regular basis and a personal interview for households that have not been providing the information.

For the Interview survey, Westat proposes a change from the current data collection schedule. A new panel would enter the CE program each quarter, so that four panels would enter each year. The first wave of data collection for each panel would begin with a bounding interview, followed three months later by a recall interview. The second wave of data collection would start nine months after the recall interview. Westat (2011c, p. 53) indicated that this change is made to reduce both cost and burden: "The current design has a total of five in-person interviews per household, creating significant cost and respondent burden.
Reducing this number to three in-person interviews would substantially reduce this burden and may lead to greater cooperation, fewer dropouts, and better data quality."

At the start of the second wave, the household would receive a package via U.S. mail reminding them to resume data collection activities, including keeping receipts. If the household has changed, it would receive a personal visit. Data collection would end with a recall interview three months later. Westat also proposes a change to the recall period, so that it varies by expense item (one-, three-, or 12-month recall).

The proposal places a very strong emphasis on having households save receipts and use records. Respondents would also be asked to provide consent for collecting their expenditure history data directly from retailers, utilities, and mortgage companies. Respondents would be encouraged to scan receipts and records into the data repository as they receive them, rather than waiting for the field representative's return interview. As with the Diary survey, the repository would generate a Web survey based on the information still needed about the receipt/record. The field representative would monitor the number of

records/receipts coming in during the three-month period and contact by telephone households that were not turning in receipts regularly.

The redesign also includes a separate Administrative Record Survey that would be developed to obtain records directly from retailers, utilities, and mortgage companies. The emphasis on obtaining data from records rather than the respondent's memory is intended to improve data quality and reduce respondent burden.

Westat estimated that the proposed diary redesign would cost approximately 60 percent more than the current diary survey. This increase in cost is primarily attributable to having multiple diary-keepers within each household. Without a budget increase, the number of sampled households would have to be reduced accordingly, and the precision of the estimates would therefore also diminish. Westat estimated that the proposed interview redesign would cost approximately twice that of the current interview survey. The increase is attributable to the increased effort in contacting more households, an effect of reducing the number of panels. The new Administrative Record Survey contributes to the cost increases reported for both surveys. The redesigned methods for the interview survey result in some increase in the precision of the estimates, due to eliminating the within-household correlation across panel waves within the same year. This offset does not entirely make up for the increase in cost. The report provides a simulation of the effect on the precision of the estimates using one-, three-, and 12-month reference periods (Westat, 2011c).

Redesign Proposal: Mathiowetz, Olson, and Kennedy

The Mathiowetz/Olson/Kennedy proposal (Mathiowetz, Olson, and Kennedy, 2011a,b) recommends a single integrated sample design with two components: (1) a cross-sectional one-month diary, and (2) a panel component for which a household would complete the one-month diary for three different waves within the year.
The proposed design makes extensive use of tablet computers, receipt scanners, and flexible memory "triggers." Box 4-4 provides key elements of this proposal and a link to the full report. The design provides for active monitoring of the diary-keeping activities of household members, with interventions when this activity appears inadequate. Their design minimizes the reliance on retrospective recall, eliminates the need to combine data from two distinct surveys, and provides an important panel component within the data structure. In discussing the advantages of their proposed design, the authors stated:

Our design addresses the issue of underreporting by minimizing reliance on retrospective reporting, promoting "real time" recording of all expenditures and payments, and emphasizing self reporting among all CU members. The use of a web-based diary, via web-enabled tablets, provides an efficient means by which each member of the CU can log on to his or her own personal diary to record expenditures. The flexibility and computing power of a tablet will allow CE staff to develop an instrument that minimizes burden (e.g., pick lists; scanning of receipts and barcodes; ease of selecting repeat purchase items) and facilitates consistency in reporting at the level of detail necessary for the CPI. We envision a data collection approach with the tablet that allows for the use of apps, integration with other technology, online help for the CU members, and real time monitoring of diary entries by the CU. (Mathiowetz, Olson, and Kennedy, 2011b, p. 11)

BOX 4-4
Key Features of the Mathiowetz/Olson/Kennedy Proposal

• Diary only
• Cross-sectional sample households keep diary for one month. Panel component keeps one-month diary three times during the year.
• Multiple diary-keepers within the household
• Use of tablet PCs with Internet connection to permit real-time uploading of data
• Ongoing monitoring of uploaded data with feedback to household
• Use of memory triggers encouraged

NOTE: Link to full report: http://www.bls.gov/cex/redwrkshp_pap_abtsrbirecommend.pdf.

Each adult (age 16 and older) member of the selected household would be asked to keep a 30-day diary, reporting expenditures "real-time" during that period. Younger children (aged 7–15) would be asked to keep a "mini diary" for that same time period. During an initial personal visit to the selected household, the field representative would collect demographic and socioeconomic data, including asking some global questions related to certain expenditures and annual income.
The field representative would probe about regular monthly payments for housing and utilities, and any automatic payment schedules. The 30-day diary process would be explained, with appropriate training on use of the diary tools.

The authors provide their rationale for multiple diary-keepers within the household:

With respect to multiple reporters per CU, the limited literature suggests that the use of multiple diaries per CU increases the reporting of expenditure items and CU expenditures (Grootaert 1986; Edgar et al. 2006).

If the source of the increasing discrepancy between CE and the Personal Consumption Expenditure data from the National Accounts is due to measurement error, then increasing self reports and minimizing recall periods are two well established means for improving data quality (Bound, Brown and Mathiowetz 2001). Furthermore, the use of technology, in which each member of the CU can log in to his or her individual diary with their own login and password, permits persons who make purchases that they would rather not have other members know about to answer confidentially (e.g., teenagers not wanting their parents to know about certain purchases), more so than if a paper diary is used (e.g., Stinson, To and Davis 2003). (Mathiowetz, Olson, and Kennedy, 2011b, p. 11)

During the 30-day diary period, household members would be asked to keep receipts and record expenditures on a real-time basis using one or more of the diary tools provided, with a computer tablet with an Internet connection as the primary recording tool. The tablet would be available for use by all household members. It would feature an instrument that minimizes burden and facilitates consistency in reporting of required details. An attached scanner and bar code reader would facilitate data capture of products and receipts. The proposal also recommends the use of a personalized e-mail account to forward receipts and electronic records. The field representative would conduct a "wrap-up" interview and data review at the end of the diary period, with retrospective questions asked as needed to fill gaps in the diary-keeping.

Mathiowetz/Olson/Kennedy encourage the adoption of multiple portable means of capturing triggers that help household members remember a purchase so that it can be recorded later. These include the respondent's own smart phone to record pictures, voice recordings, and notes. A small, simple pocket diary could also be used as a memory trigger.
A panel component of the design is recommended to better support micro-level analysis for the entire year. It is formed by a subset of the overall sample that is asked to complete a 30-day diary for months 1, 7, and 13. The authors deliberated on the best length for the diary reporting interval and the panel, stating that:

A critical design issue is the length of the panel—that is, for how many weeks or months we ask CU respondents to serve as diarists. This is definitely an issue of cost-error tradeoffs, one that impacts the costs of data collection, the willingness to participate, the extent to which the data are impacted by panel conditioning/fall-off in reporting, and the need for month-to-month and/or year-to-year comparisons among the same CUs. No single design can optimize for all of these objectives, which is why we are recommending both a cross sectional and a panel component to the single integrated sample approach. (Mathiowetz, Olson, and Kennedy, 2011b, p. 15)

This proposal recommends that BLS continue to look at sources of administrative data for benchmarking and micro-level use. Mathiowetz/Olson/Kennedy discuss a number of existing data sources, including three federal surveys that might be used to benchmark CE data. The authors also discuss nonfederal sources of data but do not incorporate a specific recommendation for their use into the current proposal. They state that they "were initially optimistic about micro-level integration of non-federal administrative data sources with CE data. However, the current state of knowledge about these 16 sources and the incredible task involved in turning administrative records from private companies into survey data for all sampled persons makes us cautious in recommending their use for purposes other than nonresponse monitoring and benchmarks" (Mathiowetz, Olson, and Kennedy, 2011b, p. 16).

Summary of the Two Proposals

While the panel does not recommend implementing either of these two designs wholesale, the designs embody important insights that became central to its deliberations, and aspects of each design are incorporated into one or all of the panel's three proposed designs presented in Chapter 6. Both proposals place renewed emphasis on the use of survey personnel to provide help, consultation, and monitoring of respondents' efforts, and the panel's thinking was clearly inspired by this model.

The most notable adoption from the Mathiowetz/Olson/Kennedy proposal is a focus on supported self-administration and the use of a tablet data collection interface. These concepts are a central feature in all three of the panel's prototypes described in Chapter 6. One prototype, Design A, Detailed Expenditures Through Self-Administration, follows much of the Mathiowetz/Olson/Kennedy proposal, as does the diary component of Design C, Dividing Tasks Among Multiple Integrated Samples.
The panel's proposed designs were also inspired, in different ways, by the Westat proposal's strong focus on encouraging the use of records. Design C, Dividing Tasks Among Multiple Integrated Samples, follows the Westat design that encourages respondents to keep receipts and record expenditures throughout the quarter prior to a visit by the field representative. The Westat data repository proposal was viewed as desirable in the future but less practical in the nearer term.