Reengineering the Survey of Income and Program Participation (National Research Council, 2009) provided extensive background on the Survey of Income and Program Participation (SIPP) and offered recommendations to help guide the U.S. Census Bureau through a redesign to improve quality, cost-effectiveness, and timeliness. This section of the current report draws heavily on the 2009 report to provide a short summary of the initial implementation and various modifications to this survey from its beginnings.1
SIPP launched its first panel (called the 1984 panel) in October 1983 with a sample size of approximately 21,000 households. “SIPP design called for sample members [household members age 15 and older] to be interviewed every 4 months [for 8 or 9 waves of interviews] in order to increase the accuracy of answers to core questions” (National Research Council, 2009, p. 19). Unfortunately, budget cuts necessitated a 7 percent reduction in sample size following wave 4. Between 1985 and 1993, new SIPP panels were initiated, with wave 1 beginning in February of each year. The design allowed data from different panels to be combined for cross-sectional estimates.
1 The principal source of information for this section is Chapter 2 of Reengineering the Survey of Income and Program Participation (National Research Council, 2009). That report, in turn, drew upon National Research Council (1993), Citro (2007), and the Census Bureau’s online history of this survey (Census, n.d.-a). Our current report does not discuss the background and previous revisions to SIPP as deeply as do these sources, and the reader is directed to them for more detailed information.
Difficulties in maintaining consistent panel size and length—as well as other factors—continued to erode the continuity and quality of SIPP data, and in 1993 the National Research Council (NRC) issued a list of recommendations for improving the survey in its report, The Future of the Survey of Income and Program Participation (National Research Council, 1993). Following that evaluation, SIPP underwent a major redesign. The most significant change (implemented in 1996) was replacing the overlapping panel design with an abutted panel design that “ran a panel through to completion before starting another panel” (National Research Council, 2009, p. 24). The NRC panel had recommended extending the panel length to 4 years, which the Census Bureau adopted. It had also recommended starting a new panel every 2 years, so that two panels would always be in the field at the same time, but that recommendation was not adopted.
Panel size and length remained unstable throughout the period following this redesign, as they had throughout SIPP’s life cycle:
- The 1996 panel had a sample size of 40,200 households, which included an oversampling of a high-poverty stratum. This panel was designed to be followed for 12 waves.
- The redesign called for a new 4-year panel to be initiated in 2000; this new panel was delayed until 2001 with a proposed sample size of 40,500 households. Low-income households were oversampled. Unfortunately, “the sample was reduced by 25 percent for budgetary reasons in wave 2, and the members of the remaining sample households were followed for as many waves as possible through nine waves” (National Research Council, 2009, p. 25).
- The 2004 panel began with a sample size of 51,400 households (a marked increase from 2001). As in 2001, low-income households were oversampled. Unfortunately, the last 4 waves of this 12-wave panel saw a 58 percent reduction in sample size and the elimination of topical modules (described below).
During this time period, the SIPP questionnaire contained a core set of questions including household demographics, income, participation in social programs, employment status, and health insurance coverage. Each SIPP interview also included one or more topical modules (TMs) that collectively covered a wide range of topics. Box 2-1 summarizes the use of TMs in SIPP in the years 1984-2004.
In the past, Census Bureau decision makers have viewed timeliness as less critical for SIPP than for other Census Bureau surveys such as the Current Population Survey Annual Social and Economic Supplement (CPS ASEC) and the American Community Survey, from which key estimates and data are released each September. Nevertheless, SIPP has become known
for inconsistent lags between data collection and data release and periodic long gaps between releases. These vagaries in data release have combined to make planning that depends on data availability difficult for SIPP users. The report addresses this topic in Chapter 9.
These issues with timeliness have been compounded by frequent re-releases of files to correct errors identified after the initial release. Rarely have these revisions affected more than a small number of items, but experienced SIPP users have come to understand that they may have to rerun portions of their analyses in the future. Greater concerns about the quality of SIPP data have arisen from occasional studies documenting declines in the reporting of income and program participation relative to independent benchmarks (Card et al., 2004; Chenevert et al., 2015; National Research Council, 2009).
An overall secular decline in survey response rates (De Heer, 1999; Schoeni et al., 2013) has contributed to rising SIPP costs while periodic funding reductions have led to sample cuts to panels with one or more waves already completed, creating analytical challenges for internal and external users alike. Lastly, as a longitudinal survey, SIPP is inherently more difficult to use than the CPS ASEC and other popular cross-sectional surveys. SIPP’s multiple waves per year and staggered rotation groups—key design features—compound the challenges that SIPP presents to less-experienced programmers and novice users.
In 2006, the Census Bureau announced that SIPP would be terminated and replaced by the Dynamics of Economic Well-Being System (DEWS).2
2 The DEWS was proposed as a longitudinal program that would measure program participation with a heavy reliance on administrative data and a survey component that was substantially scaled back (David Johnson, Reengineering the SIPP: The New Dynamics of Economic Well-Being System, presentation to the National Research Council, Committee on National Statistics, August 24, 2006, Washington, DC).
The SIPP data user community mounted an immediate and intensive response that included a letter-writing campaign to Congress, lobbying efforts, and media coverage. The result was that SIPP was reinstated with a new panel launched in 2008 with a sample size of 45,000 households. The DEWS was formally abandoned. Congress directed the Census Bureau “to work with stakeholders to reengineer SIPP to develop a more accurate and timely survey to capture the economic dynamics of the country” (National Research Council, 2009, p. 29).
This directive led to the 2009 NRC report that has been quoted throughout this chapter. In its assessment, the NRC panel listed its views of the strengths of the 2008 SIPP and the survey’s major weaknesses (see Box 2-2). The report further stressed the value of SIPP as being “uniquely able to support monthly estimates of participation in, and eligibility for, many federal and even state programs. . . . It is also unique in its ability
to support models of short-term dynamics over a wide range of characteristics, including models of earnings dynamics based on its monthly data on employers and wages. . . . SIPP’s topical modules expand the survey’s content to include types of data that few other surveys collect. . . . SIPP’s topical module data on disability have become the model of excellence for disability measurement” (National Research Council, 2009, p. 31).
Several recommendations from the 2009 report are particularly relevant for this report (see Box 2-3). Among these are the recommended goals to focus on the quality of short-run dynamics and to make additional use of administrative data. The report also recommended studying the tradeoffs between survey quality, respondent burden, and cost when fielding longer but less frequent interviews. On the technical side, the report recommended implementing a model-based imputation system and engaging in a “major program of experimentation and evaluation” to demonstrate that the event history approach could produce satisfactory data on income and program dynamics. From a survey administration perspective, the report recommended that the reengineered SIPP and the old SIPP be bridged over
a period of 2 years. Finally, the 2009 report recommended that the Census Bureau strive to release SIPP data within 1 year of data collection.
The reengineering affected many areas of SIPP design and execution including content, frequency of contact, questionnaire methodology, editing and imputation, survey management, and data structure. A primary driver for this new design was the expectation of cost savings compared to the implementation of the old design (Fields and Callegaro, 2007; Johnson3).
A change integral to the new design is that a selected household is now contacted once per year instead of three times, and the reference period is now the previous full calendar year4 rather than the previous 4 months. To assist the interviewer with the execution and coordination of this longer recall period, an event history calendar (EHC) module was developed as part of the SIPP electronic questionnaire instrument.5 (This module is described in more detail in Chapter 4.) The panel anticipates that the primary effects of this fundamental change will be a reduction in data collection costs, a change to the way burden is placed on the respondent, and potential impacts on quality. An anticipated secondary effect is a change in the data structure, which in some aspects is simpler for data users and in other aspects is more complex.
A second fundamental change in the reengineered design is that there no longer are any supplemental TMs attached to the core questionnaire6 (Box 2-1, above, describes previous TMs). There is now only a core questionnaire, to which some of the important questions from previous TMs have been added; the remaining TM questions were dropped from the reengineered SIPP. The only exception to date is the Social Security Administration Supplement, which was implemented because the redesigned questionnaire did not meet all of the data requirements of this important client. The supplement was fielded as a telephone follow-up to wave 1.
A third major change is that the SIPP team has incorporated some innovative methods into their processes: a new model-based imputation system, computer-assisted recorded interviewing methodology, and an incentive
3 Unpublished presentation by David Johnson titled SIPP Highlights and the New 2014 SIPP, at a public forum to discuss a proposed redesign for the 2014 SIPP, hosted by the National Academies of Sciences, Engineering, and Medicine, July 10, Washington, DC.
4 The recall period is actually longer than a year, because the interview takes place 1 to 5 months after the end of the reference year.
5 Unpublished presentation by Jason Fields to the study panel at the public session of its March 2014 meeting.
6 Unpublished presentation by Jason Fields to the study panel at the public session of its March 2014 meeting.
program to encourage response.7 There is also a structured integration of various paradata streams for improved management and evaluation.
In developing and implementing the new design, the Census Bureau faced a number of challenges arising from a decision to lower SIPP’s costs by replacing the system of staggered interviews at 4-month intervals with a single interview conducted in the early part of each year. These challenges included the following:
- extending the reference period from 4 to 12 months while continuing to collect monthly data;
- extending the distance between the interview and the end of the reference period from less than 1 month to between 1 and 5 months;8
- fielding the survey for only 4 to 5 months out of the year instead of continuously, which increases turnover among the field staff and requires hiring and training a greater number of new field representatives each year;
- losing the TMs but needing to retain much of their content;
- confronting the likelihood that, absent the introduction of effective countermeasures, a high proportion of transitions will be reported in the first month of the reference period, which is January for all respondents; and
- requiring field representatives to master the EHC module, which is the principal method of collecting monthly data for the lengthened reference period.
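The January-heaping concern in the list above can be illustrated with a toy tally. This is a hypothetical sketch, not SIPP code: the reported transition months are made-up data, and the expectation of roughly 1/12 per month under uniform dating is the only benchmark assumed.

```python
from collections import Counter

# Hypothetical check for "January heaping": with a single annual interview
# covering the prior calendar year, respondents who misdate a change tend
# to place it in month 1 of the reference period. Given per-person lists
# of reported transition months (1-12), tally how often each month appears.
reported_transitions = [
    [1], [1, 6], [3], [1], [9], [1, 1], [], [7], [1], [2],
]  # illustrative reports; real survey data would supply these

counts = Counter(m for person in reported_transitions for m in person)
total = sum(counts.values())
jan_share = counts[1] / total  # share of all transitions dated to January
print(f"share of transitions dated to January: {jan_share:.2f}")
# Under uniform dating across the year, we would expect roughly 1/12 = 0.08;
# a much larger share suggests heaping at the start of the reference period.
```

A diagnostic of this kind, run on actual reported transitions, is one way to gauge whether the countermeasures (such as the EHC) are working.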
In the face of all of these challenges, the Census Bureau launched a new survey at a time when the secular decline in survey response rates seemed to be accelerating (De Heer, 1999; Schoeni et al., 2013).
The reengineered 2014 SIPP panel was initially fielded in February 2014 with a sample size of approximately 53,000 households. The study panel expected to have access to data from both wave 1 of the 2014 SIPP and the final year of the 2008 SIPP by the end of 2014, for use in developing this report. Both of these waves had a reference period of calendar year 2013, and they thus formed a partial bridge year that could be used for comparisons between the old and new SIPP designs. In anticipation of hands-on data analysis at the Census Bureau facility in Suitland or at
7 Unpublished presentation by Jason Fields to the study panel at the public session of its March 2014 meeting.
8 SIPP interviews are scheduled to occur between February 1 and May 31.
one of several Statistical Research Data Centers, all study panel members applied for and obtained clearance from the Census Bureau with “special sworn status.”
To explore the current uses of SIPP, the study panel conducted a literature review (with the helpful assistance of the Research Center at the National Academies) of journal articles, books and book sections, working papers, conference proceedings papers, reports, and doctoral dissertations that used SIPP data. The search was restricted to works with publication dates of 2000 through the time of the search (mid-2014). Wanting to explore more deeply the use of SIPP by federal agencies, the study panel also contacted knowledgeable analysts in four agencies with questions regarding the use of SIPP in important programs. The panel supplemented these discussions with various panel members’ knowledge of the use of SIPP in specific program areas. The information gained from these activities forms the basis for Chapter 3, “Uses of SIPP.”
Study panel members also reviewed the new SIPP questionnaire. Chapter 4 provides a summary of the instrumentation and the questionnaire flow, as well as the handling of preloaded data. This outline was used in the panel’s work, but it will also serve as welcome documentation for SIPP data users. Further, panel members compared the questions on this new questionnaire against the uses of SIPP outlined in Chapter 3, to see whether the current important uses of SIPP would be able to be continued or whether the new design would necessitate changes from previous uses and analyses. This review forms the basis for Chapter 6.
The panel examined several of the methodological advances implemented in the 2014 SIPP, including model-based imputation, greater use of administrative data, and a continuation of incentive experiments. The panel also attempted to review recordings of SIPP interviews to examine potential reasons for seam effects in SIPP (Moore, 2008).9 However, the recording system was implemented in a way that made it impossible to use the recordings to examine such effects. All of this is covered in Chapter 5.
In Chapter 8 of this report, the panel examines the components of respondent burden, attempting to address the question of whether the new SIPP design lowers, maintains, or increases the burden on respondents compared to the old design. For reasons argued in Chapter 8, the panel concluded that it did not have sufficient information to make such a determination.
9 Seam effects are the tendency for reported changes—for example, in employment or participation in a government program—to occur disproportionately at the seams between survey reference periods. For SIPP, seam effects are evident in higher rates of change between months at the end of one 4-month reference period and the beginning of the next 4-month reference period than between consecutive pairs of months within reference periods.
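The seam effect defined in this footnote can be illustrated with a small simulation. This is a sketch under stated assumptions: the 2 percent within-period and 10 percent at-seam change rates are arbitrary illustrative values, not SIPP estimates.

```python
import random

random.seed(0)

# Simulate monthly program-participation reports for respondents
# interviewed every 4 months. Within a reference period, reports are
# relatively stable; at the "seam" between two reference periods,
# changes are reported at an inflated rate.
WITHIN_CHANGE = 0.02  # assumed monthly transition rate within a period
SEAM_CHANGE = 0.10    # assumed inflated transition rate at seams

def simulate_person(n_months=12, period=4):
    status = random.random() < 0.3  # initial participation status
    history = [status]
    for m in range(1, n_months):
        # month m opens a new reference period when m % period == 0
        p = SEAM_CHANGE if m % period == 0 else WITHIN_CHANGE
        if random.random() < p:
            status = not status
        history.append(status)
    return history

people = [simulate_person() for _ in range(5000)]

seam_changes = within_changes = seam_pairs = within_pairs = 0
for h in people:
    for m in range(1, len(h)):
        changed = h[m] != h[m - 1]
        if m % 4 == 0:
            seam_pairs += 1
            seam_changes += changed
        else:
            within_pairs += 1
            within_changes += changed

print(f"seam transition rate:   {seam_changes / seam_pairs:.3f}")
print(f"within transition rate: {within_changes / within_pairs:.3f}")
```

Comparing the two rates, as in the last two lines, is the basic diagnostic: a seam rate well above the within-period rate is the signature of a seam effect.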
At this point, the study panel comments on delays in gaining access to the 2014 wave 1 SIPP data, essential for use in comparing key estimates from the two designs. Access to “near final” wave 1 data took more than 2 years, and the delay limited the time the panel had for analysis. The wave 1 data were collected in the first half of 2014; the panel received iteration 12 of SIPP wave 1 in mid-February 2017. The panel ran its analysis programs on this version of the data, and the panel was able to begin reviewing analysis tables used in this report in March 2017. There were considerable additional analyses that the panel would have liked to explore more closely, to better understand the causes of observed differences, but a tightly constrained timeline to finish the study required the panel to focus on just a subset of the overall analysis in Chapter 7.
In the study panel’s view, the Census Bureau staff worked hard to create the redesigned SIPP. They were required to completely remake the questionnaire instrument, the edit and imputation systems, and the summary programs. Such changes will invariably result in some delays from the original timeline. However, it is important to acknowledge that the time delays in producing edited/imputed data, relative to what had been planned for this study, led to a less extensive analysis than the panel had intended. In particular, although the panel had hoped to include an analysis of the first two waves of data and thus be able to address questions relating to the transition between waves, suitable data from wave 2 were not available for the preparation of this report. In addition, with the late release of a near-final internal version of the wave 1 data, certain topics had to be dropped from our analysis, as there was not sufficient time to resolve a number of questions regarding the choice of data elements and the interpretation of particular codes. Nevertheless, the analysis of wave 1 data presented in Chapter 7 covers a broad range of topics that provide a solid and evidence-based perspective on the quality of the data collected in the 2014 panel.