DATA AND METHODOLOGICAL ISSUES FOR TRACKING FORMER WELFARE RECIPIENTS: A WORKSHOP SUMMARY

The standard check for nonresponse bias, comparing respondents to nonrespondents on major demographic characteristics, may not detect a bias. Researchers suggested that it would be better to statistically compare the major outcomes of interest between nonrespondents and respondents, and that linking administrative and survey data can help do so. For example, one major outcome that could be compared is the wage earned by former recipients, since wages for both respondents and nonrespondents should be included in administrative data sets. Most employment outcomes, as well as program participation outcomes, could also be compared using administrative data.

Sample size also affects how well nonresponse bias can be detected. If samples are too small, statistical tests may fail to detect significant differences between respondents and nonrespondents. For this reason, some workshop participants argued for larger sample sizes. Other participants countered that a smaller sample with a high response rate is preferable to a larger sample with a low response rate, since a smaller sample allows researchers to devote more effort to achieving high response rates.

Recall Concerns

Another issue surrounding the quality of survey data is the reliance placed on respondent recall. Surveys should be conducted promptly enough that respondents can recall what happened during the period of interest and give accurate responses to survey questions. Some concern was expressed that the lag in generating administrative data would also make it difficult to conduct surveys on a timely basis, since a survey cannot begin until the administrative data from which the sample of leavers is drawn are available. For some programs (TANF, Medicaid, food stamps), the lag in the availability of data may be only a couple of months.
For other records (e.g., birth certificates), the lag could be on the order of a year or more.

CONCLUSION

There are many unresolved issues regarding the data and methods for tracking those who leave welfare. One is how to make cross-state comparisons more valid; this relates both to definitional issues for leaver studies and to controlling for caseload dynamics and economic conditions across states and areas. Another, which needs further study, is how well administrative and survey data are able to track welfare leavers. Despite these unresolved issues, the first round of ASPE-sponsored leaver studies will be valuable to states for building the capacity to do further research. States and counties will be developing their administrative data sets and further enhancing the quality of those data, and the grantees will also develop skills in conducting surveys. Such capacity building should be useful for future evaluations of welfare reform.
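The nonresponse-bias check discussed above, comparing an outcome drawn from administrative records (such as wages) between survey respondents and nonrespondents, can be sketched with a standard two-sample test. The sketch below is illustrative only: the wage figures are simulated, and Welch's t statistic with a normal approximation to the p-value is one reasonable choice of test, not a method prescribed by the workshop.

```python
import math
import random

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    se = math.sqrt(var_a / n_a + var_b / n_b)
    return (mean_a - mean_b) / se

# Simulated administrative wage records (hypothetical data), split by
# whether the former recipient responded to the follow-up survey.
random.seed(0)
respondent_wages = [random.gauss(1200, 300) for _ in range(400)]
nonrespondent_wages = [random.gauss(1100, 300) for _ in range(150)]

t = welch_t(respondent_wages, nonrespondent_wages)
# Two-sided p-value via a normal approximation (adequate for large samples).
p = math.erfc(abs(t) / math.sqrt(2))
print(f"t = {t:.2f}, p = {p:.4f}")
```

A small p-value here would indicate that respondents and nonrespondents differ on the outcome itself, which is the kind of evidence a purely demographic comparison can miss.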