The Survey of Income and Program Participation (SIPP) was launched in October 1983 following a multiyear development effort that included extensive field testing and experimentation to determine the best way to collect the information the survey was designed to obtain. SIPP was designed to fill a need for data that would improve the measurement of income and give policy makers and researchers a much better grasp of how effectively government programs were reaching their target populations, how participation in different programs overlapped, and to what extent and under what circumstances people transitioned into and out of these programs. It was also designed to answer questions about the short-term dynamics of employment, living arrangements, and economic well-being.
SIPP is a national, longitudinal survey of households that collects monthly data on labor force participation, income by source, social program participation and the components of eligibility, health insurance coverage, the demographic characteristics of household members, and a variety of other items—generally over a period of 2 to 4 years. SIPP’s monthly data on these interrelated characteristics, collected via a core questionnaire administered at 4-month intervals, distinguish it from all other U.S. household surveys and uniquely enable users to measure the short-term dynamics of poverty, employment, participation in safety net programs, and a wide range of other social and economic phenomena. SIPP’s topical modules, which varied in content from wave to wave, supplemented the core questionnaire with additional sets of items that appeared either on a rotating basis or, in some cases, only once in the life of a panel.1 In summary, “SIPP provides vital information for planning, evaluation, and improvement of government programs intended to address social and economic needs of the U.S. population” (National Research Council, 2009, p. 11).
SIPP’s usefulness depends on maintaining a survey process that produces high-quality information in a timely and cost-effective manner. For this reason, the U.S. Census Bureau instituted, from SIPP’s very beginning, a program of review to maintain and improve quality while keeping costs within bounds. This report is part of that ongoing review.
In 2006, the Census Bureau undertook a research and development program to reengineer SIPP in order to improve its cost-effectiveness, quality, and timeliness. Asked to address particular aspects of this effort, the National Research Council issued a report titled Reengineering the Survey of Income and Program Participation (National Research Council, 2009), which made a number of recommendations for reengineering SIPP and for evaluating the quality of the new design.
The Census Bureau conducted parallel tests of its proposed new design against the old design and also compared results from both designs against available administrative records, although the comparisons were quite limited (Gathright et al., 2012; U.S. Census Bureau, 2013).2 These tests led the Census Bureau to conclude that the new design would do no worse than the old design, and in some instances might do better, in providing core monthly data of acceptable quality while reducing cost and the burden on respondents (U.S. Census Bureau, 2013). Subsequently, the Census Bureau completed its redesign of SIPP, using an event history methodology3 and implementing a number of the recommendations from the 2009 National Research Council report.
The Census Bureau went operational with the redesigned SIPP in 2014, collecting data for the reference year of 2013. As part of the transition, the 2008 SIPP panel collected its final waves of data in 2013, thus providing a
1 This report uses the word “panel” in two different senses. In the first, the report is authored by an expert panel, which was formed by the National Academies to carry out this study. In the second, a longitudinal panel is a term used in statistical surveys for a design that collects data from the same sample of respondents over time; for example, the report refers to the 2008 SIPP panel, meaning the cumulative waves of data collected under that specific SIPP design from 2008 through 2013. The meaning of “panel” is generally clear from the surrounding context, but where ambiguity may occur, the report uses “study panel” to refer to the authoring panel of experts and “SIPP panel” (as in “2008 SIPP panel”) for a statistical survey panel.
2 Because the redesign incorporates “event history methodology” using an event history calendar (EHC), experimental versions of the revised survey are referred to as SIPP EHC in the Census Bureau’s research.
partial bridge year for which both the old and revised SIPP methodologies were used.
This is not SIPP’s first design change. During SIPP’s early years, new panels were started annually, so that consecutive panels overlapped, and each panel ran for about 2½ years. The overlap allowed SIPP to provide consistent cross-sectional estimates drawn from equal shares of newer and more experienced respondents. Beginning in 1996, the overlap was eliminated, panels were lengthened, and their sample sizes were increased; cross-sectional consistency was sacrificed for the longer panels that eliminating the overlap made possible. The 2008 panel, the last to include SIPP’s defining 4-month interviews and staggered rotation groups, was also the longest, exceeding 5 years in length.
The Census Bureau wished to have an independent evaluation of the redesigned survey, compared with the old design, to complement its own evaluations, so it came to the National Academies of Sciences, Engineering, and Medicine with a proposal for this study. The Statement of Task for this project is found in Box 1-1. The independent evaluation examines data collected for calendar year 2013, comparing results from the old and new designs with each other and with appropriate administrative data. In addition to an evaluation of data quality, the Census Bureau requested an independent assessment of the redesigned survey content from the perspective of the policy and research uses of the data, an evaluation of the effect of the new SIPP data collection instrument on respondent burden, and consideration of potential content changes for subsequent SIPP panels that would improve the quality and utility of the information.
This report has nine chapters. Chapter 2 recounts notable developments in SIPP’s history and provides an overview of the panel’s assessment activities. Chapter 3 reviews the uses of SIPP data as reflected in a search of the literature from 2000 through 2014. These uses reflect SIPP’s three distinguishing features: its detailed monthly data, its representation of short-term dynamics, and the content of its topical modules. The chapter includes a discussion of selected agency applications of SIPP data, which highlight the diversity of SIPP uses, including a number of its most sophisticated applications. Chapter 4 covers the new SIPP questionnaire—specifically, the instrumentation and flow and the handling of data collected in the prior wave, which are used, with the respondent’s permission, to remind the respondent of program participation reported previously. Even when not used in prompts, the prior wave’s data on program participation in the first months of the reference period provide a potentially more accurate alternative to data on the same period reported more than a year later.
Chapters 5 through 8 provide the panel’s assessment of several facets of the new SIPP design and its implementation. Chapter 5 discusses methodological enhancements incorporated into the reengineered SIPP. These include improvements in imputation methodology, the greater use of administrative data, the application of computer-assisted recorded interviewing, and the inclusion of incentive experiments. Chapter 6 addresses the adequacy of the content of the 2014 SIPP relative to the older SIPP and with reference to user needs. The chapter includes a detailed analysis of how changes in content may affect a specific agency application. Chapter 7 examines aspects of data quality in the 2014 panel through a series of empirical analyses. These include a comparative assessment of the 2014 panel’s estimates of income and program participation relative to the 2008 panel and relative to independent benchmarks; an analysis of differences in the quality of reporting over the 12-month length of the reference period; a comparative evaluation of the 2014 panel’s capture of intrayear dynamics; and an examination of unit and item nonresponse. Chapter 8 reviews issues related to respondent burden and its measurement. The panel had envisioned being able to quantify changes in respondent burden resulting from the elimination of two interviews per year in exchange for a longer annual interview. However, although the enhanced computer-assisted interviewing used with the 2014 panel captures very good data on interview length, the data on interview length in prior SIPP panels were found wanting. We document what we learned about respondent burden in the 2014 panel but can provide only a limited assessment of how burden may have changed between the older SIPP design and the new design. In Chapter 9, the panel broadens its focus and provides comments, concerns, and recommendations regarding the future of SIPP.
This report is based in part on independent programming and data tabulations from the 2014 SIPP panel, wave 1, iteration 12. To conduct this work, panel members and staff were certified as census agents and accessed internal SIPP files within a secure census facility. In March 2017, the Census Bureau released SIPP public-use data files for wave 1. Researchers wishing to replicate a portion of the panel’s analyses should use the public-use data files. Results should be very similar to those the panel presents, but because the Census Bureau made additional modifications4 to the wave 1 data before releasing them to the public, the results are unlikely to be identical. Researchers who want access to the internal wave 1 data, iteration 12, should contact the Census Bureau for additional information.
4 These modifications included the topcoding of extremely large values to protect confidentiality and continued work on the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). The study panel was not apprised of other specifics.