Suggested Citation: "Sample Size." National Research Council. 1991. Improving Information for Social Policy Decisions -- The Uses of Microsimulation Modeling: Volume II, Technical Papers. Washington, DC: The National Academies Press. doi: 10.17226/1853.


DATABASES FOR MICROSIMULATION: A COMPARISON OF THE MARCH CPS AND SIPP

…people who correctly report their participation. In one state, a large proportion of AFDC recipients incorrectly reported their benefits as general assistance.

Other studies are investigating the so-called seam problem, which first showed up in SIPP but, on further investigation, turns up in other longitudinal surveys as well, such as the ISDP and the PSID. The seam problem is the tendency of respondents to report transitions as occurring between interviews rather than within the period covered by a single interview. Thus, in SIPP, exits from and entrances to programs such as AFDC and food stamps are reported more often between pairs of months that span waves than between pairs of months within waves. In the case of food stamps, it appears that the net effect on cross-sectional estimates is small—that is, total exit and entrance rates from SIPP are close to rates derived from food stamp administrative records, even though the timing of SIPP transition reports is often in error. For SSI, on the other hand, entrance rates from SIPP are significantly higher than those shown by program records (Jabine, King, and Petroni, 1990:59–60).

Another problem that confronts panel surveys is "time-in-sample" bias, whereby participation in the survey itself changes responses over time (either because respondents actually change their behavior, such as applying for benefits after learning of a program's existence from the survey, or because respondents become better or worse at answering the questions after repeated exposure to the interview). Only limited and inconclusive studies of time-in-sample bias in SIPP have been conducted to date, although work on this topic is in progress (see Lepkowski, Kalton, and Kasprzyk, 1990).
Sample Size

The review of data quality problems up to this point generally favors SIPP, particularly because data quality issues from the perspective of analyzing the low-income population have simply not received the same scrutiny in the CPS as they have in SIPP. However, the CPS has a signal advantage for microsimulation modeling: sample size. Microsimulation, as a technique for producing policy impact estimates, is distinguished by its ability to provide detailed distributional information about gainers and losers, and adequate sample size is essential to the reliability of distributional estimates. A particular requirement for modeling the AFDC program, given that eligibility provisions and benefit levels vary by state, is that the sample be large enough to permit identifying all 50 states and the District of Columbia. Ideally, the sample size would be sufficient to support state estimates for this program.

Table 6 shows the estimated number of SIPP sample persons with specified characteristics in a sample of 20,000 households (the size of the 1984 and 1990 panels) and in one of 12,000 households (the size of the other panels). The numbers are small for many important income support programs. For example, the larger sample contains only 705 AFDC and 750 SSI recipients, while the smaller sample contains only 420 AFDC and 450 SSI recipients. Analyses of subgroups are severely compromised by these limited numbers of cases.12 For example, fewer than 50 cases would be available from the larger SIPP sample to analyze the small but important component of the AFDC caseload with earnings. Moreover, the SIPP data files do not separately identify all 50 states and do not support state-by-state estimates.

TABLE 6 Estimated Numbers of SIPP Sample Persons for Selected Subpopulations

                                                      For a Sample of
Subpopulation                                  20,000 Households  12,000 Households
All persons                                         53,700            32,200
Adults                                              41,400            24,850
Persons 65 and over                                  5,965             3,580
Persons 75 and over                                  2,600             1,560
Persons in households with income
  less than poverty (monthly)                        7,400             4,440
Recipients of:
  Social security (aged and disabled)                7,475             4,485
  Railroad retirement                                  175               105
  AFDC                                                 705               420
  General assistance                                   245               150
  SSI (federal and state)                              750               450
  Medicare                                           6,510             3,905
  Medicaid                                           4,125             2,475
  WIC                                                  570               340
Multiple recipients of:
  Food stamps and AFDC                                 675               405
  Food stamps and SSI                                  285               170
  Social security and food stamps                      385               230
  Social security and housing assistance               335               200
  Medicaid and SSI                                     795               480
  Food stamps and housing assistance                   315               190

SOURCE: Jabine, King, and Petroni (1990:Table 9.1).

The corresponding sample size figures from the CPS would be about three times larger than those of the larger SIPP panel, providing a much more robust basis for estimating the impact of proposed policy changes.
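As a rough illustration of the precision at stake, the following sketch computes an approximate 95 percent margin of error for an estimated proportion, assuming simple random sampling (the actual SIPP and CPS designs are clustered, so real sampling errors would be somewhat larger). The 7 percent earnings share is an illustrative value, corresponding roughly to 50 of 705 AFDC cases; `moe_proportion` is a hypothetical helper, not part of any survey processing system.

```python
import math

def moe_proportion(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    based on n sample cases, assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative share of the AFDC caseload with earnings: ~7 percent
# (roughly 50 of 705 cases in the larger SIPP panel).
for label, n in [("SIPP 12,000-household panel", 420),
                 ("SIPP 20,000-household panel", 705),
                 ("March CPS (about 3x larger)", 3 * 705)]:
    print(f"{label}: +/- {moe_proportion(0.07, n):.3f}")
```

Even under this optimistic assumption, the smaller SIPP panel's margin of error is a third the size of the estimated share itself, while tripling the case count (as the CPS roughly does) cuts the error substantially.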
The CPS also identifies all states, although the sample size is currently not large enough to support reliable estimates for more than a few of the largest states.13

12 It should be noted, however, that the population typically being analyzed is not the reported participant population but the eligible population, which, in the case of AFDC, might amount to as many as 1,200 cases in the larger and 700 cases in the smaller SIPP sample.

13 Sampling variability in the March CPS has noticeable effects on the effort in the TRIM2 model to control simulated AFDC participants to targets from administrative records on a state-by-state basis. Typically, two to four states have more AFDC participants reported in the administrative data than there are units simulated in the CPS as eligible to participate. Another six to eight states typically have reported participants accounting for 90–99 percent of units simulated to be eligible. In some years, however, even more states have more reported participants than units simulated to be eligible: both the March 1986 and the March 1988 CPS had eight such states. This situation makes it more than usually difficult to control the simulation to state participant totals and also to characteristics of the caseload on a national basis, such as the proportion with earnings (see further discussion of this point in the text). Moreover, in examining March CPS files for 1983–1988, Giannarelli (1990) found wide variations in simulated participation rates (reported participants divided by units simulated to be eligible) from year to year for many states. These variations appeared largely due to fluctuations in the denominator rather than in the numerator. Although a comprehensive investigation was not attempted, there was no ready explanation, such as legislative actions, for the large changes in eligible populations by state (including changes that resulted in fewer simulated eligible units than reported participants); rather, these changes appeared related to fluctuations, due to sampling variability, in the number of households below the poverty line across pairs of March CPS files.
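The instability described in note 13 can be shown with a toy calculation (all figures hypothetical, not drawn from TRIM2 output): if a state's reported AFDC caseload holds steady while CPS sampling variability moves the simulated eligible count, the simulated participation rate swings from year to year and can even exceed 1.0 when fewer units are simulated eligible than are reported as participating.

```python
def participation_rate(reported_participants, simulated_eligible):
    """Simulated participation rate: reported participants divided by
    units simulated to be eligible (the denominator that fluctuates
    with CPS sampling variability)."""
    return reported_participants / simulated_eligible

# Hypothetical state: reported caseload steady at 90,000 units, while
# the simulated eligible count varies across March CPS files.
for year, eligible in [(1986, 100_000), (1987, 120_000), (1988, 85_000)]:
    rate = participation_rate(90_000, eligible)
    print(f"{year}: {rate:.3f}")
```

The hypothetical 1988 file yields a rate above 1.0, the pathological case in which the simulation cannot be controlled to the state participant total at all.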
