3 Methodological Issues in the Measurement of Work Disability
Pages 28-52

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text on each page of the chapter.


From page 28...
... As part of the effort to redesign the claims process, SSA has initiated a research effort designed to address the growth in disability programs, including the design and conduct of the Disability Evaluation Study (DES)
From page 29...
... One means by which SSA can monitor the size and characteristics of potential beneficiaries is through other ongoing federal data collection efforts. For both the conduct of the DES and monitoring of the pool of potential beneficiaries through the use of various data collection efforts, it is critical to understand the measurement error properties associated with the identification of persons with disabilities as a function of the essential survey conditions under which the data have been and will be collected.
From page 30...
... As a first step toward achieving this goal, a common language and framework need to be established for the enumeration and assessment of the various sources of error that affect the survey measurement process. The chapter draws on several empirical investigations to provide evidence on the extent of current knowledge about the error properties associated with various approaches to the measurement of functional limitations and work disability.
From page 31...
... Sampling Error

Sampling error represents one type of nonobservation variable error; it arises from the fact that measurements (observations) are taken for only a subset of the population.
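To give a sense of the scale of this error component (an illustration added for context, not a formula taken from the chapter), under simple random sampling the variance of an estimated prevalence \(\hat{p}\) based on a sample of \(n\) persons drawn from a population of \(N\) is approximately

\[
\operatorname{Var}(\hat{p}) \;\approx\; \Bigl(1 - \frac{n}{N}\Bigr)\,\frac{p\,(1-p)}{n} .
\]

For example, a prevalence near 10 percent estimated from a simple random sample of 10,000 persons carries a standard error of roughly 0.3 percentage points, before any of the nonsampling errors discussed below are considered.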
From page 32...
... , it is useful to further distinguish among the types of unit nonresponse, each of which may be related to the failure to measure different types of persons. For most household data collection efforts involving interviewers, the final outcome of an interview attempt is often classified into one of the following four categories: completed or partial interview, refusal, noncontact, and other noninterview.1 Survey design features can affect the distribution of cases across the various categories.
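As a minimal sketch of how these disposition categories feed into a summary of field outcomes, the snippet below tallies case counts and computes a simple unweighted response rate. The counts, category labels, and rate definition are illustrative assumptions, not figures or formulas from the chapter.

```python
from collections import Counter

# Hypothetical final dispositions for a set of sampled households
# (all counts are invented for illustration).
dispositions = Counter({
    "completed_or_partial": 1450,   # interview obtained
    "refusal": 230,                 # contacted but declined to participate
    "noncontact": 180,              # never reached during the field period
    "other_noninterview": 140,      # e.g., language barrier, illness, unable to respond
})

eligible_cases = sum(dispositions.values())
response_rate = dispositions["completed_or_partial"] / eligible_cases

print(f"Eligible cases: {eligible_cases}")
print(f"Unweighted response rate: {response_rate:.1%}")
for outcome, count in dispositions.items():
    print(f"  {outcome}: {count / eligible_cases:.1%}")
```

Because persons with disabilities may be overrepresented in the noncontact and other-noninterview categories, a design feature that shifts cases among these categories can change not only the response rate but also who is measured.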
From page 33...
... Respondent as Source of Measurement Error

Once the respondent comprehends the question, he or she must retrieve the relevant information from memory, make a judgment as to whether the retrieved information matches the requested information, and communicate a response.
From page 34...
... Most of the evaluations of the quality of proxy responses compared with the quality of self-reports have focused on the reporting of autobiographical information (e.g., Mathiowetz and Groves, 1985; Moore, 1988), with some recent investigations examining the convergence of self and proxy reports of attitudes (Schwarz and Wellens, 1997).
From page 35...
... The findings suggest that proxy reports of functional limitations tend to be higher than self-reports; the research is inconclusive as to whether the discrepancy is a function of overreporting on the part of proxy informants, underreporting on the part of self-respondents, or both.

Interviewers as Sources of Measurement Error

For interviewer-administered questionnaires, interviewers may affect the measurement process in one of several ways, including: failure to read the question as written; variation in interviewers' ability to perform the other tasks associated with interviewing, for example, probing insufficient responses, selecting appropriate respondents, and recording the information provided by the respondent; and demographic and socioeconomic characteristics, as well as voice characteristics, that influence the behavior of the respondent and the responses provided by the respondent.
From page 36...
... For example, as noted above, interviewer variance is one source of variability that can be eliminated through the use of a self-administered questionnaire. However, the use of an interviewer may aid in the measurement process by providing the respondent with clarifying information or by probing insufficient responses.
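A common way to express the cost of interviewer variance (a standard survey-statistics result offered here for context, not a formula from the chapter) is the interviewer design effect

\[
\mathrm{deff}_{\mathrm{int}} \;=\; 1 + (\bar{m} - 1)\,\rho_{\mathrm{int}},
\]

where \(\bar{m}\) is the average number of interviews conducted per interviewer and \(\rho_{\mathrm{int}}\) is the intra-interviewer correlation. Even a modest correlation of 0.01 combined with workloads of 50 interviews per interviewer inflates variances by a factor of roughly 1.5, which is the kind of penalty a self-administered design avoids, at the cost of losing the clarification and probing an interviewer can provide.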
From page 37...
... The focus for psychometrics is on variable errors; from the perspective of classical true score theory, all questions produce unbiased estimates, but not necessarily valid estimates, of the construct of interest. The confusion arises because both statistics and psychometrics use the terms validity and reliability, sometimes to refer to very similar concepts and sometimes to concepts that are quite different.
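In classical true score notation (standard psychometric background rather than an equation reproduced from the chapter), an observed response \(X\) is decomposed into a true score \(T\) and an error term \(E\), and reliability is the share of observed variance attributable to true scores:

\[
X = T + E, \qquad \rho_{XX'} \;=\; \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E} .
\]

Because \(E\) is assumed to have zero mean, an item can be highly reliable in this sense and still fail to measure the intended construct, which is where the two fields' uses of validity diverge.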
From page 38...
... If repeated measurement resulted in consistent reports by all respondents, test-retest measures would indicate a high degree of reliability, not dissimilar to the conclusion drawn by statisticians.

POTENTIAL SOURCES OF MEASUREMENT ERROR SPECIFIC TO PERSONS WITH DISABILITIES

Similar to any other measurement of persons via the survey process, the identification of persons with disabilities is subject to the various sources of error

Footnote 2: Within survey research, the conduct of a reinterview under the same essential survey conditions as the original interview is an example of a test-retest assessment of reliability.
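As a hedged sketch of how a reinterview (test-retest) comparison might be summarized, the code below computes percent agreement, the gross difference rate, and Cohen's kappa for a yes/no limitation item asked in an original interview and a reinterview. The paired responses are invented for illustration and do not come from any study cited here.

```python
# Hypothetical paired responses (1 = reports a work limitation, 0 = does not)
# from an original interview and a reinterview of the same persons.
original    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
reinterview = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0]

n = len(original)
agree = sum(a == b for a, b in zip(original, reinterview))
percent_agreement = agree / n
gross_difference_rate = 1 - percent_agreement  # share of persons who switch answers

# Cohen's kappa: agreement corrected for the agreement expected by chance alone.
p1 = sum(original) / n      # prevalence in the original interview
p2 = sum(reinterview) / n   # prevalence in the reinterview
expected = p1 * p2 + (1 - p1) * (1 - p2)
kappa = (percent_agreement - expected) / (1 - expected)

print(f"Percent agreement:     {percent_agreement:.1%}")
print(f"Gross difference rate: {gross_difference_rate:.1%}")
print(f"Cohen's kappa:         {kappa:.2f}")
```

High percent agreement alone can mask considerable inconsistency when the characteristic being measured is rare, which is why chance-corrected measures such as kappa are often reported alongside it.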
From page 39...
... Impairments or disabilities may limit a person's ability to participate in the survey process or limit access to the individual. The essential survey design features of a data collection effort can facilitate or limit access and participation of persons with disabilities.
From page 40...
... Many of the current measures of disability used in federal data collection efforts have not been subjected to testing methods common to new questions and questionnaires, for example, cognitive interviewing and behavior coding. Cognitive interviewing encompasses several techniques designed to elicit information about the respondent's comprehension of the question, the strategies by which the respondent attempts to retrieve information from memory, judgments as to whether the retrieved information meets the perceived goals of the question, and the formulation of responses.
From page 41...
... Although theories from cognitive psychology can provide information about the different cognitive processes by which self and proxy reporters engage in the response formulation process, one can turn to theories from social cognition to understand how individuals classify themselves and each other with respect to social categories. Although social cognition draws heavily from the theory and methods of cognitive psychology, as a subfield its focal point is on social objects, specifically, individuals or groups of individuals.
From page 42...
... Ambiguous social classification categories are also more likely to be subject to context effects; respondents use the specific wording of questions, the immediately prior questions, or the overall focus of the question as a means for interpreting questions on disability. From a theoretical perspective, it is not surprising to find that estimates of the number of persons with disabilities vary as a function of differences in the specific wording of the question, the number of questions used to determine the prevalence and severity of impairments and disabilities, the context of the questions immediately proximate to the question of interest, and the overall focus of the questionnaire (health versus employment versus program participation)
From page 43...
... by using different sets of ADL items and across different modes.5 They conclude that the measurements of functional limitations with respect to counts of ADLs, indications of the use of assistive devices or personal help, and indications of any difficulty are all subject to large amounts of measurement error, of which a substantial portion is random error. Similar to other empirical work (e.g., Mathiowetz and Lair, 1994)
From page 44...
... The finding suggests that higher levels of functional limitations reported by proxy respondents are not simply a result of selection bias, in which those with the most severe limitations are reported by proxy.6 Their analyses also suggest that there was no clear effect of mode of data collection on estimates of functional limitations. As illustrative of the variability and lack of reliability that is evident in survey estimates of functional limitations, Tables 3-1 and 3-2 present findings from the 1990 decennial census and the Content Reinterview Survey (CRS)
From page 45...
... The prevalence rate based on the Content Reinterview Survey: 1.3 percent, of which 36.5 percent were consistent responses.
From page 46...
... Although one can enumerate possible sources that explain the low rate of consistency between the two surveys, the lack of experimental design does not permit the identification of the relative contributions of the various design features to the overall lack of stability of these estimates. Empirical evidence shows that even when questions are administered under the same essential survey conditions, responses are subject to a high rate of inconsistency.
From page 47...
... Empirical Evidence Concerning Error in the Measurement of Work Disability

The assessment of work disability in federal surveys has focused on variants of a limited number of questions, most of which concern whether the individual is limited in the kind or amount of work he or she is able to do or is unable to work at all because of a physical, mental, or emotional problem. Not dissimilar to the assessment of functional limitations, work disability is measured in data collection efforts that vary with respect to the essential survey conditions, the specific wording of questions, the number of questions asked, and the determination of severity, duration, and the use of assistive devices or environmental barriers.
From page 48...
... raises questions concerning the validity of the work disability measures currently in use, several empirical investigations raise questions about the reliability of these measures, not unlike the findings with respect to the measurement of functional limitations and sensory impairments. Once again, it can be seen that differences in the wording of the questions, the context in which they are asked, the nature of the respondent, and other essential survey conditions, including the data collection organization and the sponsorship of the survey, may contribute to differences in estimates of the working-age disabled population.
From page 49...
... Once again, it can be seen that between one-third and almost one-half of the respondents are inconsistent in their responses. More recent investigations have used the extensive data from NHIS-D to investigate alternative estimates of the population with work disabilities.
From page 50...
... If one looks at those who report receiving SSI or SSDI benefits, 75 percent report that they are unable to work and 13 percent report that they are limited in the kind or amount of work that they can perform, but 12.3 percent who report receipt of benefits do not report any limitation with respect to work. Although these variations in estimates derived from different surveys suggest instability in the estimates of the proportion of persons with work disabilities as a function of the wording of the question, the nature of the respondent, and the essential survey conditions under which the measurement was taken, they provide little information about measurement error within the framework of either survey statistics or psychometrics.
From page 51...
... . As with many single screening items, the question fails to address accommodations that facilitate participation or barriers that prohibit participation.
From page 52...
... Such techniques will aid in understanding the validity of the questions and, through refinement of the wording of questions, should help improve the reliability of the items. Simply documenting that variation in the essential survey conditions of the measurement process contributes to different estimates of persons with work disabilities is not sufficient; the marginal effects of the various factors need to be measured, and their impact needs to be reduced through the use of alternative design features.
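One way such marginal effects could be measured (a sketch under assumed conditions, not a design proposed in the chapter) is a factorial experiment that crosses design features and compares the resulting estimates. The factors, levels, and rates below are entirely hypothetical.

```python
# Hypothetical 2x2 factorial experiment: estimated work-disability rate (%)
# by question wording (single screener vs. multi-item sequence) and
# respondent type (self vs. proxy). All numbers are invented.
rates = {
    ("single_item", "self"):   8.2,
    ("single_item", "proxy"):  9.6,
    ("multi_item",  "self"):  11.0,
    ("multi_item",  "proxy"): 12.9,
}

def marginal_effect(factor_index: int, level_a: str, level_b: str) -> float:
    """Average difference in the estimated rate between two levels of one factor,
    averaging over the levels of the other factor."""
    a = [r for key, r in rates.items() if key[factor_index] == level_a]
    b = [r for key, r in rates.items() if key[factor_index] == level_b]
    return sum(a) / len(a) - sum(b) / len(b)

print(f"Marginal effect of multi-item vs. single-item wording: "
      f"{marginal_effect(0, 'multi_item', 'single_item'):+.1f} points")
print(f"Marginal effect of proxy vs. self response:            "
      f"{marginal_effect(1, 'proxy', 'self'):+.1f} points")
```

Randomizing sampled persons to the cells of such a design is what allows the contribution of each design feature to be separated, something the nonexperimental comparisons described above cannot do.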

