Measuring Poverty: A New Approach (1995)

Suggested Citation: "APPENDIX B Data Sources for Measuring Poverty." National Research Council. 1995. Measuring Poverty: A New Approach. Washington, DC: The National Academies Press. doi: 10.17226/4759.

APPENDIX B

Data Sources for Measuring Poverty

This appendix provides information on the major features of four continuing surveys that provide data relevant to measuring poverty and economic well-being: the Consumer Expenditure Survey (CEX), the Current Population Survey (CPS) March income supplement, the Panel Study of Income Dynamics (PSID), and the Survey of Income and Program Participation (SIPP). The appendix also provides detailed comparisons of the features and quality of the March CPS and SIPP. The March CPS is the current source of the nation's official income and poverty statistics; we recommend that SIPP become the official source instead (see Chapter 5). (The report of the Panel to Evaluate SIPP made the same recommendation; see Citro and Kalton, 1993:8.)

MAJOR FEATURES OF THE CEX, MARCH CPS, PSID, AND SIPP

Consumer Expenditure Survey

The CEX is sponsored by the Bureau of Labor Statistics (BLS) and conducted by the Census Bureau, with a current budget of about $12 million per year. Historically, surveys of expenditures by consumers (with varying names and formats) were fielded at roughly 10- to 15-year intervals from 1901 to 1950. The 1950 survey was the first one to be officially designated the Consumer Expenditure Survey. The 1950 and 1960-1961 surveys used annual recall for expenditures. In 1972-1973, the current design of a quarterly Interview Survey and a two-week Diary Survey was introduced. In 1980 the CEX became a continuing survey. Its major uses are to provide the market basket for the Consumer Price Index and to provide data for analysis of expenditures in relation to demographic and other characteristics. (For information on the CEX, see Bureau of Labor Statistics, no date; Jacobs and Shipp, 1990.)

Design and Use

The Interview Survey includes a sample of 6,800 consumer units (of which about 5,000 are used for quarterly estimates), interviewed in person at 3-month intervals. Households are in the sample for five quarters (the first interview has a 1-month recall and is used for bounding purposes and to collect an inventory of durable goods). There are monthly rotation groups: each month, one-fifth of the sample is new and one-fifth is completing its fifth and final interview. Household response rates to the Interview Survey have averaged about 85 percent since 1980. There appears to be little time-in-sample bias in the survey, but considerable recall error: for example, apparel expenditures reported for the first month prior to the interview are 124 percent of the monthly mean, while those reported for the third month prior to the interview are only 76 percent of the mean (Silberstein, 1989).

The Diary Survey includes a sample of 6,000 consumer units, each of which records daily expenditures for 2 weeks. Interviews are spread out over the year. Interviewers make three visits to each unit: an initial visit to drop off the first-week diary, a second visit to drop off the second-week diary and pick up the first-week diary, and a third visit to pick up the second-week diary. Household response rates to the Diary Survey have ranged from about 85 to 90 percent.

The CEX covers the U.S. civilian noninstitutionalized population, including military in civilian housing, students in university or college housing, and group homes. (The 1982-1983 interviews excluded the rural population because of budget cuts.) The reporting unit is the consumer unit, defined as one of the following: a single person living alone or sharing a household with others but financially independent; family (household members related by blood, marriage, or adoption); two or more persons living together who share responsibility for two of three major expenses—food, housing, and other expenses. The respondent is any member of the consumer unit aged 16 or older with most knowledge of the unit's finances. People who leave a sampled address are not followed.

In its publications, BLS makes use of data from both the Interview and the Diary Surveys to develop a total picture of expenditures. Comparisons with data from the National Income and Product Accounts (NIPA) indicate that the CEX estimates for some categories are quite complete; these include rent, utilities, fuels, and public services; vehicle purchases; and gasoline and motor oil. But for other categories the CEX estimates fall considerably short: for example, from information provided by BLS, the ratios of CEX to NIPA estimates from 1987 to 1990 were only about 0.70 for food, 0.75 for household furnishings and equipment, 0.60 for apparel and services, and 0.60 for public transportation (see also Bosworth, Burtless, and Sabelhaus, 1991; Gieseman, 1987; Slesnick, 1991a).[1]

[1] However, the NIPA and CEX data are not strictly comparable.
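
Such comparisons reduce to a simple ratio of a survey aggregate to an independent benchmark aggregate. The Python sketch below illustrates the arithmetic only; the dollar figures are hypothetical placeholders, not actual CEX or NIPA aggregates.

```python
# Illustrative only: compute survey-to-benchmark coverage ratios for
# expenditure categories. Dollar aggregates below are hypothetical,
# not actual CEX or NIPA figures.

cex_aggregates = {   # hypothetical CEX aggregate expenditures ($ billions)
    "food": 420.0,
    "apparel and services": 150.0,
    "public transportation": 18.0,
}
nipa_aggregates = {  # hypothetical NIPA benchmark aggregates ($ billions)
    "food": 600.0,
    "apparel and services": 250.0,
    "public transportation": 30.0,
}

for category, cex_total in cex_aggregates.items():
    ratio = cex_total / nipa_aggregates[category]
    print(f"{category}: CEX/NIPA = {ratio:.2f}")
```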

Researchers who analyze expenditure data typically work with the Interview Survey, from which users can construct annual data on expenditures and income. (The Interview and Diary Survey samples are independent, so there is no way to actually link the microrecords.) However, some proportion of consumer units in the sample for the Interview Survey do not have observations for all four quarters because of dropping out of the survey or moving away from the sampled address. (The sample, technically, is one of addresses. Consumer units that move from the sampled address are not followed, but, instead, the new occupants are interviewed.) Also, because of the rotation design, a large proportion of observations with complete information must have their data adjusted in some manner in order to obtain calendar-year estimates.

Content of the Interview Survey

• Demographic characteristics

• Work experience  Information is obtained for consumer unit members aged 14 and over on work experience and job characteristics in the previous quarter and in the prior 12 months (the latter information is obtained at the second and fifth interviews).

• Detailed expenditures  Detailed quarterly data (per each payment or bill) are obtained for expenditure categories that comprise an estimated 60-70 percent of total expenditures, including rent, facilities, and services for rented living quarters (including housing assistance subsidies); payments on mortgages, lump-sum home equity loans, and line of credit home equity loans; ownership costs (extra payments on mortgage principal, ground rent, cooperative or condominium fees); telephone expenses; utilities and fuels; construction, repairs, alterations, and maintenance of property; purchases of appliances, household equipment, and other selected items; household equipment repairs, service contracts, and furniture repair and reupholstering; purchases of home furnishings and related household items; purchases of clothing; purchases of infants' clothing, watches, jewelry, and hairpieces; purchases of sewing materials; payments for leased vehicles; purchases of vehicles; disposals of vehicles; vehicle maintenance and repair; vehicle equipment, parts, and accessories; licensing, registration, and inspection of vehicles; other vehicle operating expenses; premiums for other than health insurance; premiums for health insurance; coverage by Medicare and Medicaid; medical and health expenditures and reimbursements; educational expenses paid by the consumer unit and by others (including for nursery school and day care centers); trips by type of expense for each trip completed during the quarter; reimbursements for trip expenses; local overnight stays; and gifts of commodities for people outside the family.

• Global (or usual) expenditures  Global (or usual) expenditures are obtained for categories that comprise an additional estimated 20-25 percent of total expenditures, including quarterly amounts for subscriptions, memberships, books, and entertainment expenses; quarterly amounts for miscellaneous items (e.g., funerals, catered affairs, accounting fees, home services, including baby-sitting and in-home child care, pets and pet expenses, alimony, child support, charitable contributions); usual weekly expenses for supermarkets and specialty food stores; usual monthly expenses for liquor and food away from home; quarterly benefits from food stamps (and months received) and other meals provided free; quarterly amounts for selected services and goods (e.g., laundromats); usual weekly expenses for tobacco products; and usual monthly expenses for haircuts for men and women members of the consumer unit.

• Expenditures in last 12 months  Data on expenditures in the prior 12 months are obtained at the fifth interview for occupational expenses (e.g., union dues) and contributions, including alimony, child support, college expenses for students attending school away from home, gifts to people outside the consumer unit, contributions to charities, contributions to religious organizations, contributions to educational organizations, political contributions, and other contributions.

• Real assets  An inventory of major household appliances and features of the dwelling unit, together with descriptions of each owned property, is obtained at the first interview, and changes in ownership of property and mortgages are obtained each quarter. The rental value of the owned home and the value of the owned home are also obtained.

• Financial assets  Data obtained include current credit balances (e.g., credit cards, credit unions, bank loans); credit balances a year ago; finance charges paid in the prior 12 months (e.g., on revolving credit cards, late payments to doctors); changes in financial assets, comparing value last month and 1 year ago (e.g., savings accounts, checking accounts, savings bonds, securities); purchases and sales of stocks, bonds, or mutual funds in the prior 12 months; investments to or withdrawals from own business or farm in the prior 12 months; amounts owed currently and 1 year ago by others to someone in the consumer unit; and settlements during the past year on insurance policies. All of these items are obtained at the fifth interview; current credit balances are also obtained at the second interview.

• Income in the prior 12 months  Data on income for the prior 12 months are obtained at the second and fifth interviews. Sources obtained for each consumer unit member aged 14 and over include wages or salary, nonfarm self-employment income, farm self-employment income, Social Security or railroad retirement, and Supplemental Security Income (SSI). Sources obtained for the consumer unit as a whole include worker's compensation and veterans' benefits; public assistance; interest on savings accounts and bonds; regular income from dividends, royalties, and estates and trusts; income from pensions or annuities from private and public sources; net income or loss from roomers or boarders; net income or loss from rental property; income from alimony, child support, and regular contributions from persons outside the consumer unit; lump-sum payments; money from the sale of household furnishings or other belongings; other money income (e.g., scholarships, foster care payments); and refunds (e.g., from federal income tax or insurance policies).

• Taxes  Data are obtained at the second and fifth interviews on tax deductions from the last paycheck of each consumer unit member aged 14 and over (federal income tax, state and local income tax, and Social Security payroll tax and deductions for pensions). Data are also obtained for the prior 12 months on payments by the consumer unit as a whole for additional federal income tax (beyond that withheld from earnings), additional state and local taxes, property taxes not reported elsewhere, and other taxes not reported elsewhere. Sales taxes are calculated from information provided for individual expenditures and are included in the component expenditures.

CPS March Income Supplement

The CPS is a continuing survey, begun in the 1940s. Income questions were first asked in 1945 (for income year 1944).[2] Since 1956 the income questions have been part of the supplement each March; since 1970 the March supplement has also included questions on work experience in the prior year. (Supplements in other months cover such topics as voting behavior, educational enrollment, and fertility and marital history.) BLS sponsors the core of the CPS, which is designed to provide monthly unemployment rates. The Census Bureau conducts the survey and sponsors the March income supplement. The total budget for the CPS is about $28 million per year, of which about $2 million to $3 million is for the March supplement. (For information on the March CPS, see Bureau of the Census, 1992b; and Citro, 1991.)

[2] Since about 1960, however, the income data for 1944 and 1945 and the nonfarm income data for 1946 have been omitted from the Census Bureau's P-60 series money income reports.

Design

The CPS has a rotating design. Households are in the sample for 4 months, out of the sample for 8 months, and in again for 4 months. Hence, there is 50 percent overlap in the sample for poverty estimates from year to year. The sample size is about 60,000 households.

The sample covers the U.S. civilian noninstitutionalized population. The March supplement also includes military in civilian housing and an additional sample of 2,500 housing units that had contained at least one adult of Hispanic origin as of the preceding November interview. The reporting unit is the household, with unrelated individuals and families also identified. The respondent is each household member aged 15 and older, but proxy responses are readily accepted. Interviews are in person for the first month and then by telephone to the extent possible. People who leave a sampled address are not followed. (Response rates and other aspects of data quality are reviewed below.)

A major redesign of the CPS was recently implemented (see Cohany, Polivka, and Rothgeb, 1994). The redesign includes respecification of the sample design on the basis of information from the 1990 census about the geographic distribution and other characteristics of the population, changing the data collection mode to computer-assisted personal interviewing and computer-assisted telephone interviewing (CAPI/CATI), and making important wording changes to the core questions on labor force participation. No changes were made to the March income supplement (except to put the questionnaire into a CAPI/CATI format), but the responses may be affected by one or more aspects of the redesign of the core survey.

Content

The content of the core CPS interview includes

• demographic characteristics; and

• labor force participation, hours worked, reason for part-time work, reason for temporary absence from job, industry and occupation in prior week, job search behavior in the previous 4 weeks if not working and when last worked, usual hours and usual earnings, union membership, reason left last job, and reasons for looking for work (for selected rotation groups).

The content of the March supplement includes

• labor force participation and job history in the prior calendar year for each household member aged 15 or older;

• annual income for the prior calendar year for each household member aged 15 or older by detailed source—about 30 types of regular cash income are identified separately, including wages and salaries, net self-employment income, Social Security for oneself or a spouse, Social Security for one's children, railroad retirement, unemployment compensation, veterans' compensation, black lung payments, disability payments, SSI, Aid to Families with Dependent Children (AFDC), other welfare, child support, alimony, private pension, federal civilian pension, military pension, state or local government pension, annuity income, income from estates and trusts, other retirement or disability or survivor payments, money from relatives or friends, interest income, dividends, net rental income, income from individual retirement accounts, Pell Grants, other educational financial aid, other cash income;

• participation in noncash benefit programs, including energy assistance, food stamps, public housing, and school lunch; and

• health insurance coverage.

Panel Study of Income Dynamics

The PSID is a continuing panel survey of a cohort of families, begun in 1968. The survey is sponsored and conducted by the University of Michigan Survey Research Center (SRC). Since 1983 the National Science Foundation has been the principal funder, with substantial continuing support from the Office of the Assistant Secretary for Planning and Evaluation in the U.S. Department of Health and Human Services. (The survey was originally funded by the Office of Economic Opportunity; other agencies that have provided funds include the U.S. Departments of Labor and Agriculture, the National Institute of Child Health and Human Development, the National Institute on Aging, and the Ford, Sloan, and Rockefeller foundations.) The current annual budget is about $2.6 million, which includes direct and overhead costs for the core survey only, not including separately funded supplements. (For information on the PSID, see Hill, 1992; Survey Research Center, 1989.)

Design

The sample comprises three components: (1) 2,900 families interviewed in 1968 from the SRC national sampling frame, representative of the civilian, noninstitutionalized population; (2) 1,900 low-income families with heads under age 60 who were interviewed in 1968 from the 1966-1967 Survey of Economic Opportunity (SEO); and (3) 2,000 Hispanic families added in 1990. Currently, 9,000 families (including original sample families and the subsequent families of their members) are interviewed once each year.

The reporting unit is the family, defined as one of the following: a single person living alone or sharing a household with other nonrelatives; a family of members related by blood, marriage, or adoption; an unmarried couple living together in what appears to be a fairly permanent arrangement. The respondent is the family head, usually the adult male head if there is one. Interviews are conducted annually and, since 1973, mostly by telephone (92%). Original sample members who leave to form separate family units are followed (including children born to original sample members), and information is obtained about the coresidents in their new families. Sample members who are institutionalized are tracked and interviewed subsequently if they return to a family setting.

The PSID experienced a large sample loss—24 percent—at the initial interview in 1968, but additional sample loss dropped to 8 percent of the eligible families at the second interview, and it was only 1-2 percent at each interview thereafter (Survey Research Center, 1989: Table 2a). The initial large sample loss was partly due to the PSID sample design, which originally included a national probability sample of about 2,900 families and a sample of about 1,900 low-income families drawn from the sample used for the 1967 SEO. Several factors increased the nonresponse from the SEO sample, including the requirement by the Census Bureau that SEO families sign a release allowing their names to be given to the PSID (Hill, 1992).

The extent to which attrition introduces bias into estimates from the PSID is not clear. Several studies in the 1980s found that, although cumulative sample loss was over 50 percent (52% by 1980 and 58% by 1985), there was no evidence that attrition correlated with individual characteristics in a way that would produce biased estimates. For example, Becketti et al. (1988:490) found no evidence that attrition "has any effect on estimates of the parameters of the earnings equations that we studied." Duncan, Juster, and Morgan (1984) also found that response rates were just as high in the PSID among families in the lowest income decile as in the middle or upper income deciles (see also Curtin, Juster, and Morgan, 1989, and other studies cited in Hill, 1992). However, Duncan and Rodgers (1991) found bigger differences in poverty rates for white children between the PSID and the March CPS in 1981-1986 than in 1967-1971 (the PSID rates were lower in both periods). They attribute the finding to the fact that, as of 1986 (before the addition in 1990 of a new Hispanic sample), the PSID represented only about one-third of the Hispanic children reported in the CPS while it represented all non-Hispanic white and black children.

One indicator of data quality is that about 95 percent of heads and spouses provide "adequate responses" for labor and asset income, so that the responses do not have to be edited. The percentage of adequate responses has been in the range of 94-98 percent over the life of the survey (Survey Research Center, 1989: Table 5).

Content

The PSID collects the most detailed information about family heads and, since the late 1970s, about wives and cohabitors. The core content includes

• demographic characteristics;

• employment information—current and employment history in past year;

• income sources and amounts for the head for the past calendar year (including which months received) from wages or salaries; bonuses, overtime, tips, or commissions; professional trade or practice; farming or market gardening; roomers or boarders; extra jobs; rent; dividends, interest, trust funds, or royalties; AFDC; SSI; other welfare; Social Security (including separately listed amounts for other family members); veterans' benefits; other retirement pay, pensions, or annuities; unemployment compensation; worker's compensation; alimony; child support; help from relatives; and anything else;

• income sources and amounts for the spouse for the past calendar year (including which months received) from earnings; unemployment compensation; worker's compensation; and interest, welfare, pensions, child support, or any other source (with each source to be separately listed);

• income sources and amounts for other individual family members aged 16 and over for the past calendar year (including which months received) from earnings from first and second jobs; and any other income such as pensions, welfare, interest, gifts, or anything else (with each source to be separately listed);[3]

• income earned by individual family members under age 16 and family lump-sum income (e.g., inheritance or insurance settlements) in past calendar year;

• public assistance—food stamps (amount in past calendar year and specific months in which received), housing subsidies, energy assistance, and Medicaid or other welfare medical services;

• estimate of federal taxes paid (based on information about income, exemptions, dependents living outside the household, whether itemized, mortgage interest payments, and property taxes);

• housing, including current value, remaining mortgage principal, monthly mortgage payment for owned home, monthly rent, and annual utility costs;

• estimate of annual food costs (in home and away from home) from reports of average weekly expenditures;

• financial assistance to people living elsewhere;

• housework time;

• geographic mobility;

• socioeconomic background;

• health, religion, and military service; and

• county-level data (unemployment rate, wage rate for unskilled workers, labor market demand conditions).

[3] It is difficult to assign a value to the number of income sources collected in the PSID because of the question format for family members other than the head, which asks for particular sources to be named without going through a specified list.

Event histories (dated to the month) are recorded for demographic, employment, and poverty characteristics. Supplemental topics have included achievement motivation, attitudes, child care, cognitive ability, commuting to work, disability and illness, do-it-yourself activities, extended family and kinship ties, fertility and family planning, financial situation and health of parents, food stamp and SSI eligibility, fringe benefits, hospitalization, housework, housing and neighborhood characteristics, housing utilities, impact of inflation, inheritances, job training, retirement plans and experiences, retrospective histories, saving behavior, smoking and exercise, spells of unemployment and time out of the labor force, time and money help with emergencies, time use, and wealth. In 1990, there were some links to Medicare records.

Survey of Income and Program Participation

SIPP is a continuing panel survey, begun in 1983, that is sponsored and conducted by the Bureau of the Census. The current annual budget is about $30 million to $32 million. (For information on SIPP, see Citro and Kalton, 1993; and Jabine, King, and Petroni, 1990.)

Design

The current design introduces a new sample panel each February. Each sample of households (panel) is interviewed every 4 months for 32 months (or 2.67 years); because of budget restrictions, some panels have had fewer than eight interview waves.[4] There are monthly rotation groups. Until 1992 interviews were in person to the extent possible; beginning in February 1992 the first and sixth interviews have been in person, with the rest by telephone. Under this design, three panels are in the field in most months of each year. (For information about response rates and other aspects of data quality, see below.)

[4] The 1993 panel will be extended for a total of 10 years, with annual interviews after the first 3 years of interviews every 4 months.

The sample covers the U.S. civilian noninstitutionalized population and members of the armed forces living off post or with their families on post. Sample size has varied from 12,500 to 23,500 households per panel; 20,000 households is the current design target. The reporting unit is the household, with unrelated individuals and families also identified. The respondent is each household member aged 15 and older; proxy responses are accepted if necessary. Original sample members aged 15 and older who move to new households are followed, and information is obtained about the coresidents in their new households. Sample members who are institutionalized are tracked and interviewed subsequently if they return to a household setting.

The proposed redesign of SIPP recommended by the Panel to Evaluate SIPP calls for introducing a new panel every 2 years instead of every year; interviewing each panel at 4-month intervals for 48 months (12 waves) instead of 32 months (8 waves); and increasing the sample size per panel from 20,000 to 27,000 households. Under this design, two panels would be in the field each year (see Citro and Kalton, 1993).

The redesign of SIPP proposed by the Census Bureau Senior Management Redesign Team calls for introducing a new panel every 4 years (i.e., with no overlap across panels); interviewing each panel at 4-month intervals for 48 months; and increasing the sample size per panel to 50,000 households.

The redesign of SIPP will be fully implemented in the 1996 panel, with a dress rehearsal in 1995. In addition to extending the length and increasing the sample size of each panel, features of the redesign include new samples drawn on the basis of information from the 1990 census, switching the data collection mode to CAPI/CATI, and changes in selected questionnaire items based on recommendations from the Panel to Evaluate SIPP and others. The new sample design for SIPP will also include an oversample of addresses in which the residents were below the poverty level in 1989, based on information from the 1990 census; proxy characteristics, such as housing tenure and family type, will be used for oversampling addresses for which the census long-form information on poverty status is not available.

Content

The content of the current SIPP core interview includes

• demographic characteristics;

• monthly information on labor force participation, job characteristics, and earnings;

• monthly information on public and private health insurance coverage; and

• monthly information on detailed sources and amounts of income from public and private transfer payments; information—monthly for the most part—on noncash benefits (food stamps, school lunch, etc.); and information for the 4-month period on income from assets.

In total, about 65 separate sources of cash income are identified for each household member aged 15 and over, together with benefits from seven in-kind programs; for a few sources annual amounts are obtained in topical modules (see Citro and Kalton, 1993: Tables 3-1, 3-2).

Data are also collected in topical modules, which are asked once or twice in each panel, on a wide range of subjects, including

• annual income and income taxes;

• educational financing and enrollment;

• eligibility for selected programs (including expenditures on shelter, out-of-pocket medical care costs, and dependent care);

• employee benefits (1984 panel only);

• housing costs and finance;

• individual retirement accounts;

• personal history (fertility, marital status, migration, welfare recipiency, and other topics); and

• wealth (property, retirement expectations and pension plan coverage, assets and liabilities).

In addition, each panel includes a topical module with variable content designed to respond to the needs of policy analysis agencies. Topics covered to date have included characteristics of job from which retired, child care, child support, disability status of children, energy use, extended measures of well-being, functional activities, health status and utilization of health care, home health care, household relationships, housing costs and finance, job offers and reservation wage, long-term care, pension plan coverage, retirement plans, support for non-household members, training, work expenses, and work schedule (see Citro and Kalton, 1993: Table 3-13).

Summary Comparisons

In evaluating the usefulness of a survey for measuring poverty, it is important to consider several characteristics: sample size and design; the amount of detail for data on income, taxes, assets, and expenditures; and the quality of the information. Table B-1 summarizes some key characteristics of the CEX, March CPS, PSID, and SIPP; the next section discusses in more detail the quality of the income data. (The last section provides a detailed comparison of the March CPS and SIPP.)

The surveys range in size from 5,000 consumer units (CEX) to 60,000 households (March CPS). The CEX, March CPS, and PSID collect income data for a number of separate cash and in-kind sources for the previous calendar year or the 12 months prior to interview waves, with some differences among the three; SIPP obtains income data at each 4-month wave, with monthly reporting for most sources. All the surveys except the March CPS collect information with which to determine a variety of taxes. The CEX and SIPP obtain detailed information on asset holdings; the PSID ascertains home value and equity; the March CPS does not ask about assets except to obtain income flows. Finally, all the surveys except the March CPS obtain regular information on such expenditures as food and shelter; the CEX obtains extensive expenditure information.

Quality of Income Data

A detailed comparison of data quality across the four surveys is beyond the scope of this appendix, but some rough aggregate comparisons for income reporting can be made.

All four surveys clearly experience net underreporting of income.[5] The very rough comparisons of aggregate incomes for the population as a whole suggest that the March CPS captures about 90 percent of the regular cash income estimated by independent sources (Bureau of the Census, 1989a: Table A2; 1992b: Table C-1) and that the CEX (Interview Survey) in turn captures about 90 percent of the income reported in the March CPS (Cutler and Katz, 1991: Table A2). Aggregate income amounts for SIPP and the March CPS are virtually the same (Jabine, King, and Petroni, 1990: Table 10.8): SIPP obtains higher reports of nonearnings income (by about 6%), but somewhat lower reports of earnings (by about 2%) compared with the March CPS. The assumption is that some people are reporting net rather than gross earnings to SIPP. If SIPP obtained as complete reporting of earnings as the March CPS, it would capture 1-2 additional percentage points of regular income.

[5] Net underreporting is a combination of underreports and overreports of income. For specific income types, classification errors also occur. Inferences of net underreporting, obtained from comparing survey estimates with those from the National Income and Product Accounts, other independent sources, or other surveys, must be made with care, as differences in definitions and processing procedures can affect the validity of the comparisons.

Comparisons of poverty rates across the surveys suggest that income underreporting at the lower end of the distribution is most problematic in the CEX, followed by the March CPS, with the PSID and SIPP obtaining more complete reporting. Thus, in the period 1984-1991, poverty rates based on before-tax cash income from the CEX were higher than the rates from the March CPS, which, in turn, were higher than those from SIPP (see Table 5-12). Duncan and Rodgers (1991) find that poverty rates in the PSID are below those in the March CPS and comparable to those in SIPP. Duncan, Smeeding, and Rodgers (1992: Table 1) consistently find a smaller percentage of families with incomes below $15,000 in the PSID than in the March CPS; the difference ranged from 0.4 to 3.0 percentage points in the period 1967-1988. (As noted above, PSID estimates of low-income families do not appear biased by differential attrition, although underrepresentation of Hispanics may account for some of the CPS-PSID difference.)

The evidence suggests that the greater the emphasis on income reporting in a survey, the lower is the estimated poverty rate. Thus, the less complete income reporting at the lower end of the distribution in the March CPS relative to SIPP is probably partly due to the fact that the March CPS is a supplement to a survey in which the major emphasis is on collecting monthly labor force information. Income reporting is probably particularly poor in the CEX Interview Survey partly because the CEX is an expenditure survey, not an income survey. The secondary role of income data is evident in many aspects of the Interview Survey design and questionnaire content. Thus, income is asked for the preceding 12 months, rather than quarterly; only a few major income sources are asked separately for each adult member of the consumer unit; and the total number of sources asked about is considerably smaller than in the other surveys. Experience gained in the Income Survey Development Program (ISDP—the predecessor to SIPP) and SIPP itself suggests that each of these factors hampers complete income reporting.[6]

[6] Experiments in the ISDP found that a "short" income form produced less complete reporting than the "long" form subsequently used in SIPP and that asking a single respondent about income receipt by other members of the household produced less complete reporting than asking each member about his or her own income (Ycas and Lininger, 1983:27). Also, no imputations are performed in the CEX for missing income information.

TABLE B-1 Summary Comparisons of CEX, March CPS, PSID, and SIPP

Sample Size and Design
  CEX (Interview Survey): 5,000 consumer units; each unit in sample for 5 quarters; rotation group design; quarterly interviews.
  CPS (March Income Supplement): 60,000 households; each household in sample for 8 months over 2-year period; rotation group design; monthly interviews (income supplement once per year).
  PSID: 9,000 families; overrepresents low-income families; continuing panel with annual interviews.
  SIPP: 40,000 households (50,000 proposed); new panel each February (every 4 years proposed); each original sample adult in panel for 32 months (48 months proposed); interviews every 4 months.

Income Data
  CEX (Interview Survey): Annual data for 12 months prior to 2nd and 5th interviews; 5 sources for individuals, 11 sources for consumer unit; major in-kind benefits.
  CPS (March Income Supplement): Data for prior calendar year for about 35 cash and in-kind sources.
  PSID: Data for prior calendar year for about 25 cash and in-kind sources, with specific months received.
  SIPP: Data for about 70 cash and in-kind sources at each 4-month wave, with monthly reporting for most sources.

Tax Data
  CEX (Interview Survey): Information to determine federal, state, and local income taxes; payroll taxes; property taxes; sales taxes.
  CPS (March Income Supplement): None.
  PSID: Information to determine federal and state income taxes; payroll taxes; property taxes.
  SIPP: Information to determine federal, state, and local income taxes; payroll taxes; property taxes.

Asset Holdings Data (a)
  CEX (Interview Survey): Detailed inventory of property holdings and household appliances; information at 5th interview on credit balances for current month and 1 year ago; information on financial asset holdings currently and 1 year ago.
  CPS (March Income Supplement): None, except ascertains home ownership.
  PSID: Regularly, information about home value and mortgage debt; occasionally, information about saving behavior and wealth.
  SIPP: Detailed inventory of real and financial assets and liabilities once each panel; more frequent measures for assets relevant for assistance programs.

Expenditure Data
  CEX (Interview Survey): Detailed quarterly data for expenditures estimated to account for 60-70% of total expenditures; global (or usual) quarterly data for expenditures estimated to account for another 20-25% of total.
  CPS (March Income Supplement): None.
  PSID: Monthly rent or mortgage costs; annual utility costs; average weekly food costs; child support payments.
  SIPP: Information once or twice each panel on last month's out-of-pocket medical care costs, shelter costs (mortgage or rent and utilities), dependent care costs, and child support payments.

(a) All four surveys obtain data on income flows from assets.

One problem in estimating poverty from income surveys is that they often show people with zero or very low income amounts that are not credible. As a result, analysts often find that people with very low incomes are not substantially worse off than people with higher income levels on such measures as ownership of vehicles, air conditioning, and number of bathrooms in their homes. Presumably, findings of this sort stem from such phenomena as self-employed people who report zero income or losses on a business accounting basis but who have adequate cash flow for their own needs. Or some of these people may be students or others with low cash income but access to assets or other resources. Or some people may simply underreport their income, particularly if it is from "off-the-books" sources.

Scattered evidence suggests that SIPP may have fewer reporting problems of this sort, perhaps because SIPP takes more of a cash-flow approach to reporting of self-employment income. For example, in 1984, the proportion of people with income-to-poverty ratios of less than 50 percent was 38 percent of the total poverty population in the March CPS but only 29 percent in SIPP (Bureau of the Census, 1986: Table 6; Radbill and Short, 1992: Table 1). Also, SIPP data for 1984 (Radbill and Short, 1992: Table 10) showed steeper relationships of income-to-poverty ratio categories with such well-being measures as home and vehicle ownership than did the 1980 census data analyzed by Christopher Jencks (private communication). For example, home ownership ratios were as follows from the two data sources:

Unit's Income Level Relative to Poverty: Home Ownership Ratios (1980 Census / 1984 SIPP)
  Income less than zero: .80 / .19
  Zero or positive income up to 0.50 of poverty: .38-.41 / .19
  Income 0.50-0.99 of poverty: .38-.46 / .33
  Income 1.00-1.99 of poverty: .50-.62 / .49
  Income 2.00 or more of poverty: .78 / .65-.84
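
The comparison above rests on classifying units by their income-to-poverty ratio (unit income divided by the unit's poverty threshold) and then tabulating a well-being measure within each category. The sketch below illustrates that classification; the income and threshold values are hypothetical, and the category labels simply follow the tabulation above.

```python
# Illustrative only: classify units by income-to-poverty ratio and tabulate
# home ownership within each category, mirroring the comparison above.
# The records and thresholds below are hypothetical, not survey data.

RATIO_CATEGORIES = [
    ("Income less than zero",                          lambda r: r < 0),
    ("Zero or positive income up to 0.50 of poverty",  lambda r: 0 <= r < 0.50),
    ("Income 0.50-0.99 of poverty",                    lambda r: 0.50 <= r < 1.00),
    ("Income 1.00-1.99 of poverty",                    lambda r: 1.00 <= r < 2.00),
    ("Income 2.00 or more of poverty",                 lambda r: r >= 2.00),
]

# Each record: (annual income, poverty threshold for the unit's size, owns home?)
units = [
    (-3000.0, 11000.0, True),
    (4500.0, 11000.0, False),
    (16000.0, 14500.0, True),
    (52000.0, 14500.0, True),
]

def categorize(income: float, threshold: float) -> str:
    ratio = income / threshold          # income-to-poverty ratio
    for label, test in RATIO_CATEGORIES:
        if test(ratio):
            return label
    raise ValueError("ratio not classified")

counts: dict[str, list[int]] = {}       # label -> [owners, total units]
for income, threshold, owns in units:
    label = categorize(income, threshold)
    owners, total = counts.get(label, [0, 0])
    counts[label] = [owners + int(owns), total + 1]

for label, (owners, total) in counts.items():
    print(f"{label}: ownership ratio {owners / total:.2f} ({owners}/{total})")
```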

THE MARCH CPS AND SIPP COMPARED

This section provides a more detailed comparison of the March CPS income supplement and SIPP, focusing on the adequacy of information from each survey that is relevant to measuring poverty. It also discusses the ability of each survey to construct poverty measures with shorter or longer than annual accounting periods, to construct poverty measures for states, and to construct other measures related to poverty (e.g., measures of access to material goods or access to health care along the lines of work by Mayer and Jencks, 1993). Finally, it offers some comparisons of the quality of income reporting in the two surveys.

Categories of Information

Taxes

The March CPS income supplement asks no questions about any type of tax payment. Currently, for use in its experimental poverty estimates, the Census Bureau models federal income taxes, state income taxes, and payroll taxes and imputes annual tax payment amounts to the CPS records (see Bureau of the Census, 1992a; Nelson and Green, 1986).

SIPP generally includes, twice in each panel (in the summer or fall period), a topical module that asks about tax payments for the previous year. Questions on tax filing status, number of exemptions, type of form filed (joint, single, etc.), and schedules filed (e.g., Schedule A) are answered by more than 90 percent of respondents. However, questions on adjusted gross income, itemized deductions, tax credits, and net tax liability have high nonresponse rates, primarily because respondents are asked to produce their tax forms and use them as the basis for answers to these questions, but only about one-third do so. In addition, there are nonresponse rates of 7 to 14 percent for specific items among those people who do use their tax forms to respond (Bureau of the Census, no date(a)). The Census Bureau has work in progress to develop a tax estimation model for SIPP similar to the one used for the March CPS. The SIPP tax information, even with quality problems, should help in the development of a reliable model.
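
The text does not describe the internals of the Census Bureau's tax model, so the sketch below is only a minimal illustration of what imputing one tax from reported income involves: a payroll tax computed from annual earnings with a flat rate and an earnings cap. The rate and cap shown are placeholder parameters, not the statutory values for any year and not the Bureau's actual procedure.

```python
# Illustrative only: impute a simple payroll tax from reported annual earnings.
# The rate and taxable-earnings cap are placeholder parameters, not the
# statutory values used in the Census Bureau's tax model.

PAYROLL_TAX_RATE = 0.0765     # placeholder combined employee rate
TAXABLE_EARNINGS_CAP = 60000  # placeholder cap on earnings subject to the tax

def imputed_payroll_tax(annual_earnings: float) -> float:
    """Return an imputed payroll tax for one earner."""
    taxable = max(0.0, min(annual_earnings, TAXABLE_EARNINGS_CAP))
    return PAYROLL_TAX_RATE * taxable

# Example: impute taxes for a few hypothetical earners and subtract them
# to move from before-tax to after-tax earnings.
for earnings in (8000.0, 35000.0, 90000.0):
    tax = imputed_payroll_tax(earnings)
    print(f"earnings {earnings:>8.0f}  imputed payroll tax {tax:>7.2f}  "
          f"after-tax {earnings - tax:>9.2f}")
```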

Nonmedical In-Kind Benefits

The March CPS asks about the benefits a household received the previous year from the School Lunch Program (how many children in the household received free or reduced-price lunches during the previous year); housing assistance (whether living in public housing or receiving a rent subsidy); the Food Stamp Program (how many people were covered in the prior year, how many months stamps were received, and the total value of stamps for the prior year); and energy assistance (how much money was received since the previous October).

SIPP obtains considerably more detailed information: monthly information on recipiency and benefit amounts for food stamps and the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); information every 4 months about energy assistance, school lunch, and school breakfast; and information twice a panel about public housing and subsidized housing.

Medical Benefits/Costs

The March CPS asks which household members were covered during the previous year by Medicare; Medicaid; the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS), the Civilian Health and Medical Program of the Veterans Administration (CHAMPVA), or military health care; and private health insurance. For the last, questions are asked about whether the coverage was in a plan in one's own name offered by a current or former employer or union; whether the employer or union paid for all or some of the costs; and who else in the household was covered under the plan. Separate questions are also asked about how many children under age 15 were covered during the prior year by Medicare or Medicaid, another health insurance plan, or the insurance plan of someone not residing in the household.

SIPP obtains considerably more detailed information, distinguishing among coverage provided by the following programs: Medicare, Medicaid, CHAMPUS, CHAMPVA, military health insurance, current employer or union health insurance, former employer health insurance, and other health insurance. Coverage is ascertained every 4 months for Medicare and every month for the other programs. SIPP also determines which children in the household are covered under Medicaid or other health insurance.

With regard to out-of-pocket medical insurance and medical care costs, the March CPS obtains no information. SIPP asks once each panel about last month's unreimbursed medical care costs.

Child Care and Other Work Expenses

The March CPS asks no questions about child care arrangements or costs or other work expenses. (Occasionally, supplements in other months have included questions on child care arrangements and costs.)

SIPP obtains information once each panel on last month's dependent care costs incurred to enable a household member to be employed. All panels to date have also included a module on child care that asks detailed information about child care arrangements and costs. The 1984-1987 panels included a module on work expenses, including commuting and other costs.

Child Support Payments

The March CPS asks no questions about children outside the household or about payments to support such children. All SIPP panels to date have included a detailed module on child support.

Asset Holdings

The March CPS asks no questions about the value of asset holdings or liabilities, but information is obtained on whether the house is owned or being bought or is rented. Questions are also asked on total income in the prior year from interest on investments (e.g., savings accounts, certificates of deposit); dividends from stocks and mutual funds; and net income from rent (including income from rented property, roomers or boarders, and royalties).

SIPP obtains detailed information on asset ownership (and income flows) every 4 months. SIPP also obtains a detailed balance sheet of financial and property assets once each panel, and some assets are valued twice a panel (see Citro and Kalton, 1993: Table 3-2). Nonresponse rates are low for the core asset ownership questions, for example, about 1 percent for savings accounts and stocks; but they are generally high for the questions on 4-month income flows, for example, 30-35 percent for interest and 30 percent for reinvested dividends (Jabine, King, and Petroni, 1990: Table 5.5). After imputation for nonresponse, SIPP obtains an estimated 80 percent of the dividend income reported to the Internal Revenue Service (IRS), compared with 61 percent in the March CPS, and an estimated 65 percent of reported interest income, compared with 79 percent in the March CPS, which uses an improved imputation procedure. The March CPS estimate of interest income using the old imputation procedure was only 62 percent of the IRS estimate. Both SIPP and the March CPS fall much farther short of dividend and interest income aggregates when the comparison is made to the National Income and Product Accounts (NIPA); however, the NIPA estimates require extensive adjustments, which may not be complete, for comparability with household survey estimates (see Jabine, King, and Petroni, 1990: Table 10.3).

Nonresponse rates to the questions on value of asset holdings in the topical modules are also very high, although lower than were experienced in the ISDP: 35-40 percent for value of own business, market value of stocks and mutual fund shares, and debt on these assets. After imputation, SIPP obtains higher estimates of equity in homes and motor vehicles in comparison with estimates of the Federal Reserve Board, because of somewhat higher estimates of gross value and considerably lower estimates of debt in SIPP, but it obtains considerably lower estimates of equity in noncorporate business, value of financial assets, and consumer debt (see Eargle, 1990: Table D-2).[7]

[7] SIPP is not alone in experiencing quality problems with the collection of asset data. A number of panel surveys provide estimates of wealth that fall short of those from the Survey of Consumer Finances, a complete survey of household wealth that includes a household sample together with a sample of high-income households drawn from the IRS Statistics of Income file who agree to participate (see Curtin, Juster, and Morgan, 1989; Juster and Kuester, 1991). Recently, the Health and Retirement Study achieved more complete reporting of asset values with a technique called "bracketing," in which holders of an asset who do not know or refuse to provide a value are asked if the value is above a certain amount; if yes, whether it is above another (higher) amount, and so on. High rates of response are obtained by this method, although the response categories are very broad (Juster and Suzman, 1993:16-20).
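
A minimal sketch of the bracketing logic described in note [7] follows; the dollar thresholds and the stand-in answer function are hypothetical, and an actual instrument would tailor the thresholds to the asset type.

```python
# Illustrative only: the "bracketed" follow-up logic described in note [7].
# When a respondent cannot or will not report an exact asset value, a short
# sequence of yes/no threshold questions narrows the value to a broad range.
# The bracket amounts and the answer function below are hypothetical.

from typing import Callable, Optional, Tuple

BRACKETS = [5_000, 25_000, 100_000, 500_000]  # hypothetical dollar thresholds

def bracket_value(is_above: Callable[[int], bool]) -> Tuple[Optional[int], Optional[int]]:
    """Return (lower_bound, upper_bound) for the asset value.

    `is_above(amount)` stands in for the respondent's yes/no answer to
    "Is the value above $amount?". Ascending thresholds are asked until
    the respondent answers "no".
    """
    lower: Optional[int] = None
    for threshold in BRACKETS:
        if is_above(threshold):
            lower = threshold           # value is above this threshold
        else:
            return lower, threshold     # first "no" closes the bracket
    return lower, None                  # above the highest threshold asked

# Example: a respondent whose (unreported) holding is worth about $60,000
# answers "yes" to $5,000 and $25,000 and "no" to $100,000.
low, high = bracket_value(lambda amount: 60_000 > amount)
print(f"reported range: {low} to {high}")   # reported range: 25000 to 100000
```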

Ability to Support Other Estimates

Shorter or Longer Term Measures

The March CPS provides annual measures of income and poverty. Almost no information is available with which to construct longer term measures. (Because of the rotation group design, one-half of the sample for one year's March supplement is in the sample for the next year's March supplement; hence, it could be possible to construct measures of poverty status over 2 years for this subsample.) Only very limited information is available with which to construct shorter term measures: information is obtained about months of receipt of food stamps and AFDC and about weeks worked, weeks unemployed, and weeks out of the labor force in the prior year.

SIPP, because of its monthly (or 4-month) income information, can be used to construct poverty measures for months, quarters, or other periods shorter than a year. Under the current design, SIPP can provide limited longer term measures: for example, transitions in poverty status from one year to the next or estimates of the proportion entering poverty in the first year of a panel who are still poor 1 year or 1-1/2 years later. Under the proposed redesign to extend the length of each panel, SIPP would be able to support longer term measures with accounting periods of up to 4 years. (The 1993 SIPP panel will be extended to cover a 10-year period, with annual interviews beginning after the first 3 years of interviews every 4 months.)
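
A minimal sketch of how monthly income reports support accounting periods of different lengths follows. The monthly amounts and the annual threshold are hypothetical, and a real measure would also have to track family composition and thresholds month by month; the point is only that the same monthly series can be aggregated to monthly, 4-month, or annual poverty flags.

```python
# Illustrative only: aggregate monthly income reports into accounting periods
# of different lengths and flag periods below a prorated poverty threshold.
# The monthly amounts and annual threshold are hypothetical, and a real
# measure would also track family composition month by month.

ANNUAL_THRESHOLD = 12000.0   # hypothetical annual poverty threshold for the unit

monthly_income = [600.0, 650.0, 0.0, 0.0, 900.0, 1500.0,
                  1500.0, 1500.0, 1400.0, 1600.0, 1700.0, 1800.0]

def poor_periods(months: list[float], period_len: int) -> list[bool]:
    """Return a poverty flag for each complete period of `period_len` months."""
    period_threshold = ANNUAL_THRESHOLD * period_len / 12.0
    flags = []
    for start in range(0, len(months) - period_len + 1, period_len):
        period_income = sum(months[start:start + period_len])
        flags.append(period_income < period_threshold)
    return flags

print("monthly flags:", poor_periods(monthly_income, 1))
print("4-month flags:", poor_periods(monthly_income, 4))
print("annual flag:  ", poor_periods(monthly_income, 12))
```

With these hypothetical figures the unit is poor in several months and in the first 4-month period but not for the year as a whole, which is exactly the kind of sub-annual distinction that annual CPS data cannot make.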

State Estimates

The CPS sample size and design make it possible to analyze poverty for geographic areas as well as population groups. The Census Bureau recently published state poverty rates (Bureau of the Census, 1992c: Table B). Standard errors for yearly estimates were small for large states (e.g., less than 5% for California and New York in 1991) but high for small states (e.g., 20% for Delaware and New Hampshire in 1991). Standard errors were smaller for 3-year average poverty rates (e.g., 3.5% for California and 15% for New Hampshire).

SIPP is less able to provide reasonably reliable state poverty estimates with the current sample size of about 40,000 households (based on combining two panels) and a design that does not disproportionately sample smaller states. The redesign will increase the sample size to 50,000-55,000 households, but it still may not provide as reliable state estimates as does the March CPS. The proposed oversampling of low-income households in SIPP, beginning with the 1996 panel by using information from the 1990 census, may increase the reliability of the data for detailed poverty analysis.
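
The gain from averaging over years can be approximated with a simple variance calculation, sketched below. The yearly standard errors and the adjacent-year correlation are hypothetical; the correlation term is only a rough stand-in for the effect of the CPS overlapping sample design (with fully independent yearly samples it would be zero), so the result should not be read as the Census Bureau's calculation.

```python
# Illustrative only: approximate standard error of a 3-year average poverty
# rate from yearly standard errors. Yearly SEs and the correlation between
# adjacent-year estimates are hypothetical; overlapping CPS samples make
# successive years positively correlated, which shrinks the gain from averaging.

import math

yearly_se = [2.0, 2.0, 2.0]   # hypothetical SEs of yearly state estimates
rho_adjacent = 0.3            # hypothetical correlation between adjacent years

n = len(yearly_se)
variance_sum = sum(se ** 2 for se in yearly_se)
# covariance terms for adjacent years only (non-adjacent years assumed uncorrelated)
covariance_sum = sum(2 * rho_adjacent * yearly_se[i] * yearly_se[i + 1]
                     for i in range(n - 1))
se_average = math.sqrt((variance_sum + covariance_sum) / n ** 2)

print(f"SE of a single year : {yearly_se[0]:.2f}")
print(f"SE of 3-year average: {se_average:.2f}")
```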

Related Measures

The March CPS does not obtain information that would enable the development of alternative measures of economic well-being, such as an index of access to material goods or an index of health status and access to health care.

SIPP also does not regularly obtain information that would permit the development of measures of access to a wide range of material goods. However, it does ascertain twice in each panel ownership of the residence and of a vacation home or undeveloped lot, together with information on the make, model, and year of each car, van, or truck owned by someone in the household and whether the household owns a motorcycle, boat, recreational vehicle, or other vehicle. Occasionally, a topical module has obtained additional information. For example, Wave 4 of the 1984 SIPP panel asked about housing conditions and about a list of consumer durables—range, oven, refrigerator, freezer, washer, dryer, dishwasher, black-and-white television, color television, and air conditioning (see Radbill and Short, 1992: Table 10). Wave 6 of the 1991 SIPP panel and Wave 3 of the 1992 panel included a module on extended measures of well-being. This module has questions on consumer durables (e.g., whether the family has a clothes washer or dryer); living conditions (e.g., whether the house is in good repair and the neighborhood is safe); ability to meet expenses for basic needs (e.g., whether the family was ever evicted for nonpayment of rent); and sources of help (e.g., how much help the family could expect to get from relatives living nearby in case of sickness).

SIPP has often obtained information on health status and access to health care in topical modules. For example, Wave 3 of the 1984 panel asked about self-reported health status, days in the last 4 months sick in bed, number of doctor visits in the last 12 months, and number of hospital nights in the last 12 months.

Quality of Income Data

A key issue in assessing the adequacy of the March CPS or SIPP for measuring poverty is the quality of the estimates. Although some research on data quality has been done for the March CPS and considerably more has been done for SIPP, it is not possible at this time to provide an estimate of the total error in the poverty or other income statistics from either survey. There is some comparative information available on what might be termed internal indicators of quality, such as population coverage ratios and household and item response rates, that may indicate potential problems in survey estimates. There is also some limited comparative information on aggregate statistics from the two surveys, such as the percentage of total income of various types that is captured, compared with independent sources. Such comparisons do not identify underlying components of error and must be made with care, given different definitions and procedures between the two surveys and between the surveys and other sources.

Despite these limitations, the available information on data quality (discussed below) shows clearly that there is reason to be concerned about the quality of income and poverty statistics from both SIPP and the March CPS. Some indicators, such as item nonresponse rates and amounts of Social Security and other income types collected, in comparison with independent estimates, favor SIPP, while other indicators, such as household nonresponse rates and amounts of wages and salaries collected, in comparison with independent estimates, favor the March CPS. Overall, however, SIPP appears to be doing a somewhat better job of measuring income, particularly at the lower end of the income distribution. SIPP's more frequent interviews and detailed probing for receipt of different income sources appear to identify more recipients of many income types than the March CPS, although the dollar amounts reported are not always more complete in SIPP than in the CPS. Perhaps more important, SIPP is arguably in a better position to take steps to improve income quality, because of its focus on income and program participation, whereas the March CPS is necessarily constrained as an appendage to a labor force survey. Indeed, no changes to the March income supplement were even considered as part of the recent redesign of the main CPS (except those changes, such as the sample redesign and the introduction of CAPI/CATI, that apply to the entire survey), and the research program on data quality is limited.

SIPP will undergo a major redesign to improve the usefulness of the data (notably the extension of each panel to 48 months), which will likely include changes and improvements to the questionnaire. SIPP also has an active research program to investigate and improve data quality (see Jabine, King, and Petroni, 1990).

Population Undercoverage

It is well known that household surveys rarely cover the population as well as the decennial census (see Shapiro and Bettin, 1992; Shapiro and Kostanich, 1988). SIPP and the March CPS are no exception. Thus, even after adjustment for survey nonresponse, the SIPP data for March 1984 covered only 85 percent of black men and 91-93 percent of all other people when compared with census-based population estimates, while the March 1984 CPS covered only 84 percent of black men and 90-94 percent of all others. By age, black men in the 20-39 age categories were generally the worst covered. Coverage ratios were even worse in March 1986 for black men for both SIPP and the March CPS—80 and 82 percent, respectively (Jabine, King, and Petroni, 1990: Tables 10.12, 10.13). More recent data indicate that the situation has not improved: the March 1992 CPS covered only 79 percent of black men, 87 percent of black women, and 90-95 percent of white and Hispanic men and women (Coder, 1992a: Table C-1).8

8 Other household surveys, including the Consumer Expenditure Survey, also exhibit population undercoverage (see Shapiro and Bettin, 1992). Recent work indicates that population undercoverage in surveys may not be as high as previously believed, relative to the decennial census, when comparisons are made that exclude census overcounts (see Shapiro, Diffendal, and Cantor, 1993). However, survey undercoverage remains substantial: for example, the coverage ratio for black males was only 89 percent in the February, May, August, and November 1990 CPS when compared with a 1990 census estimate adjusted for overcounts (versus 84% when compared with an unadjusted estimate). Moreover, these rates do not include the undercount in the census itself relative to demographic estimates of the population.

The Census Bureau uses ratio-estimation procedures to adjust SIPP and March CPS survey weights for population undercoverage. The weights are adjusted so that the population estimated from the surveys agrees with updated decennial census-based population estimates by age, sex, race, and Hispanic origin. SIPP weights are also adjusted to agree with the March CPS weights by household type.
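
A minimal sketch of the ratio-estimation step just described, assuming one adjustment cell per age-sex-race-ethnicity combination: every weight in a cell is multiplied by the ratio of the census-based control total to the survey-weighted estimate for that cell. The cells, weights, and control totals are invented, and the Census Bureau's actual procedures involve many more steps.

```python
# Sketch of a ratio (post-stratification) adjustment of survey weights to
# census-based population controls. Cells, weights, and controls are invented.

from collections import defaultdict

# Each record: (cell label, base weight). A cell would really be defined by
# age, sex, race, and Hispanic origin, and each cell would hold many records.
records = [
    ("black men 20-39", 1500.0), ("black men 20-39", 1400.0),
    ("white women 40-64", 1600.0), ("white women 40-64", 1550.0),
    ("white women 40-64", 1500.0),
]

# Independent census-based population controls for the same cells.
controls = {"black men 20-39": 3_500_000, "white women 40-64": 4_800_000}

# Sum the survey weights within each cell.
weighted_totals = defaultdict(float)
for cell, weight in records:
    weighted_totals[cell] += weight

# Ratio-adjust: scale every weight so the cell totals match the controls.
adjusted = [
    (cell, weight * controls[cell] / weighted_totals[cell])
    for cell, weight in records
]

for cell, weight in adjusted:
    print(f"{cell}: adjusted weight {weight:,.0f}")
```

The limitation discussed next in the text is visible in the arithmetic: scaling within a cell can align the survey with the controls for that cell, but it cannot correct for ways in which the people the survey misses differ from the people it reaches within the same cell.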

However, these ratio adjustments do not correct all coverage errors. First, they do not correct for the undercount in the decennial census itself: although it is minimal in total—net undercount was estimated to be between 1 and 2 percent of the population in 1980 and 1990—it is substantial for some population groups. In 1980, an estimated 9-10 percent of black children under age 5 were missed, as were about 15 percent of middle-aged black men (Fay, Passel, and Robinson, 1988: Tables 3.2, 3.3; Robinson, 1990). (The decision was recently made to use census-based population estimates that are adjusted for the census undercount as weighting controls for the CPS and SIPP.) Second, the ratio adjustments do not correct for characteristics other than age, sex, and ethnic origin on which the undercovered population might be expected to differ from the covered population. Fay (1989) analyzed within-household undercoverage in the CPS relative to the decennial census, using a 1980 CPS-census match. His results are suggestive of ways in which weighting adjustments do not adequately compensate for household survey undercoverage. For example, he finds that about one-fourth of adult black men who are counted in the census but not in the CPS are household heads, whose households should be categorized as married-couple households in the CPS but instead are categorized as households headed by unmarried women.

The correlates of undercoverage (besides age, race, and sex) are not definitively established. However, analysis of the 1980 census postenumeration survey and of other survey, administrative records, and ethnographic data suggests that undercount rates are higher for the following groups: household members other than the head, spouse, and children of the head; unmarried people; people living alone or in very large households; and people residing in central cities of large metropolitan areas (see Citro and Cohen, 1985; Fein, 1989). In addition, there is evidence that the rate of undercount increases as household income decreases. Overall, these tentative findings suggest that minorities, unattached people, and low-income people are at much greater risk of not being covered in household surveys than other people and, hence, that undercoverage affects SIPP and March CPS-based estimates of poverty. Both the overall poverty rate and, perhaps more important, the distribution of poverty across groups may be affected. The Census Bureau has recently begun a research program to investigate the undercoverage problem in greater depth and take steps to reduce it (Shapiro and Bettin, 1992).

Household and Person Nonresponse

Relative to many other surveys, the CPS obtains high response rates. Yet 4-5 percent of households fail to respond to the CPS, and another 9 percent of people in otherwise interviewed households fail to respond (Citro, 1991:26). In addition, a considerable number of people, although responding to the basic CPS labor force questionnaire, do not respond to the March income supplement. Nonresponse to the supplement is treated together with other cases of failing to answer one or more specific questions (see below). To adjust for whole-household nonresponse to the basic CPS, the Census Bureau increases the weights of responding households; to adjust for person nonresponse, it imputes a complete data record from another person with similar demographic characteristics. These procedures assume that respondents represent the characteristics of nonrespondents; this assumption has not been adequately tested.
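
The whole-household nonresponse adjustment described above amounts, in spirit, to inflating respondents' weights by the inverse of the response rate within an adjustment cell. The sketch below shows only that spirit; the cells and counts are invented, and the actual CPS procedures are considerably more elaborate.

```python
# Sketch of a weighting-cell adjustment for whole-household nonresponse:
# responding households' weights are inflated by the inverse of the response
# rate in their adjustment cell. Cells and counts are invented.

cells = {
    # cell: (responding households, nonresponding households)
    "central city": (800, 120),
    "suburban":     (900,  60),
    "nonmetro":     (700,  50),
}

def adjusted_weight(base_weight, cell):
    responding, nonresponding = cells[cell]
    response_rate = responding / (responding + nonresponding)
    return base_weight / response_rate

# A responding central-city household with a base weight of 1,500 now also
# "represents" its cell's nonrespondents.
print(round(adjusted_weight(1500.0, "central city"), 1))  # -> 1725.0
```

The untested assumption flagged in the text is explicit here: the inflation treats a cell's nonrespondents as if they looked like its respondents.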

Like all household surveys, SIPP experiences household nonresponse, and like all longitudinal surveys, it suffers cumulative sample loss or attrition at each successive interview wave (some households that fail to respond at an interview wave are subsequently brought back into the survey). In addition, it experiences "type Z" nonresponse—the failure to obtain information, either in person or by proxy, for individual members of otherwise cooperating households.

Attrition in SIPP to date has been highest at the first and second interviews—5-8 percent of eligible households at Wave 1 and 4-6 percent of eligible households at Wave 2. Thereafter, the additional loss is only 2-3 percent in each of Waves 3-5 and less than 1 percent in each subsequent wave. By Wave 6 (after 2 years of interviewing), cumulative sample loss is 18-20 percent of eligible households; by Wave 8, it is 21-22 percent (Bowie, 1991). The Panel to Evaluate SIPP estimated that total sample attrition at the end of 12 waves (4 years) might be 25 percent (Citro and Kalton, 1993:102). The attrition experience in SIPP is quite comparable to that in the ISDP (Nelson, Bowie, and Walker, 1987) and the PSID (with the exception that, as noted above, the PSID experienced a larger sample loss at the first two waves).
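
The cumulative loss figures quoted above can be reproduced, approximately, by simply accumulating the per-wave losses, each read as a percentage of originally eligible households. The midpoint values below are taken from the ranges in the preceding paragraph; treating them as additive is an assumption of this illustration rather than a statement about the Census Bureau's accounting.

```python
# Accumulate per-wave sample loss, each expressed as a percentage of
# originally eligible households (midpoints of the ranges quoted above).
wave_loss = [6.5, 5.0, 2.5, 2.5, 2.5, 0.75, 0.75, 0.75]  # Waves 1-8

cumulative = 0.0
for wave, loss in enumerate(wave_loss, start=1):
    cumulative += loss
    print(f"after Wave {wave}: cumulative loss about {cumulative:.1f} percent")
```

The running total lands near the reported 18-20 percent by Wave 6 and 21-22 percent by Wave 8, which is the only point of the exercise.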

Attrition reduces the number of cases that are available for analysis—including the number available for longitudinal analysis over all or part of the time span of a panel and the number available for cross-sectional analysis from interview waves—and thereby increases the sampling error of the estimates. More important, the people who drop out may differ from those who remain in the survey. To the extent that adjustments to the weights for survey respondents do not compensate for these differences, estimates from the survey may be biased.

The available evidence does suggest that people who drop out of SIPP differ from those who stay in the survey. Studies of nonresponse from the 1984 SIPP panel show that household noninterview rates after the first wave tended to be higher for renters, for households located in large metropolitan areas, and for households headed by young adults. Individuals who did not complete all of the interview waves, compared with those who did, tended to include more residents of large metropolitan areas, renters, members of racial minorities, children and other relatives of the reference person, people aged 15-24, never-married people, and people with no savings accounts or other assets (Jabine, King, and Petroni, 1990:35-37, Table 5.4). A recent analysis of attrition from the 1990 SIPP panel obtained similar results (Lamas, Tin, and Eargle, 1994). This study found that attrition was more likely to occur among young adults, males, minority groups, never-married people, poor people, and people with lower educational attainment.

In addition, more limited evidence suggests that the current noninterview weighting adjustments do not fully compensate for differential attrition across groups. One evaluation of the procedures to adjust for household nonresponse at each wave developed two sets of weights for Wave 2 households in the 1984 panel—one set based on all Wave 2 households and one set based just on those Wave 2 households that provided interviews at Wave 6. Comparing Wave 2 estimates from these two samples showed that the latter set produced higher estimates of median income and fewer households with low monthly income than the former set, evidence that the weights do not adequately adjust for higher attrition rates among low-income households (Petroni and King, 1988). A subsequent study that made the same comparison for the 1985 panel obtained similar findings (King et al., 1990). With regard to annual estimates of poverty from SIPP, one study (Lamas, Tin, and Eargle, 1994) found that including people with missing waves, using an imputation process, produced somewhat higher poverty rates than using complete reporters only. Approximately one-sixth of the difference between annual poverty rates in SIPP and the March CPS is apparently due to attrition bias.

It is important to note that the current cross-sectional nonresponse adjustments in SIPP make only minimal use of the information that is available from previous waves for many current nonrespondents. Also, in constructing longitudinal files from SIPP panels, the Census Bureau assigns zero weights to original sample members who missed only one or a few waves, in addition to those who missed all or most waves. The Census Bureau has recently committed itself to an intensive program of research to improve the weighting adjustments for attrition as part of the decision to move to 4-year panels for SIPP with no overlap (Weinberg and Petroni, 1992).
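
The evaluation design described above, comparing Wave 2 estimates from all Wave 2 households with estimates from only those households that survived to Wave 6 after reweighting the survivors, can be sketched as follows. The records, adjustment cells, and the simple cell-ratio reweighting are invented for illustration and are not the procedures actually used by the Census Bureau; the published evaluation compared medians and the share of households with low monthly income, whereas this sketch uses a weighted mean for brevity.

```python
# Sketch of the evaluation design: compare a Wave 2 estimate from the full
# Wave 2 sample with the same estimate from Wave-6 survivors only, after
# reweighting survivors so cell weight totals match the full sample.
# Records and cells are invented.

records = [
    # (adjustment cell, base weight, monthly income, survived to Wave 6?)
    ("renter", 1000.0,  600, False),
    ("renter", 1000.0,  900, True),
    ("renter", 1000.0, 1200, True),
    ("owner",  1000.0, 1800, False),
    ("owner",  1000.0, 2400, True),
    ("owner",  1000.0, 3000, True),
]

def weighted_mean(rows):
    total_w = sum(w for _, w, _, _ in rows)
    return sum(w * y for _, w, y, _ in rows) / total_w

def cell_total(rows, cell):
    return sum(w for c, w, _, _ in rows if c == cell)

full_sample = records
survivors = [r for r in records if r[3]]

# Reweight survivors within cells so their weight totals match the full sample.
reweighted = [
    (c, w * cell_total(full_sample, c) / cell_total(survivors, c), y, s)
    for c, w, y, s in survivors
]

print(f"full Wave 2 sample mean income:   {weighted_mean(full_sample):.0f}")
print(f"reweighted Wave-6 survivors mean: {weighted_mean(reweighted):.0f}")
```

With these invented records the survivors-only estimate comes out higher, the same direction as the published comparison, because the dropouts are disproportionately low income and the cell reweighting cannot see income.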

Item Nonresponse

In addition to household and person nonresponse, there is substantial item nonresponse in the March CPS. The Census Bureau imputes as much as 20 percent of the total income in the CPS. For some income sources, imputation rates are even higher—as much as one-third of nonfarm self-employment income, interest, and dividend payments are imputed (Bureau of the Census, 1989a: Table A-2; Bureau of the Census, 1992b: Table C-1). SIPP compares favorably with the March CPS on item nonresponse rates: overall, only 11 percent of total regular money income for 1984 was imputed in SIPP, compared with 20 percent in the March CPS. The SIPP and March CPS imputation rates for earnings were 10 percent and 19 percent, respectively; for public and private transfers, 12 percent and 21 percent, respectively; and for property income, 24 percent and 32 percent, respectively (Jabine, King, and Petroni, 1990: Table 10.8; see also Citro and Kalton, 1993: Tables 3-4, 3-5, for comparisons of nonresponse rates for such specific income sources as AFDC and SSI).

The imputation process maximizes the available sample size for analysis from a survey by providing filled-in records for respondents whose records would otherwise have to be discarded if key analytical variables were missing. However, the process can introduce error. No definitive evaluation has been conducted of the imputation procedures used in the March CPS or SIPP; however, available evidence suggests that the procedures are a source of error and could be improved.

The Census Bureau currently applies very complex procedures, which it refers to as statistical matches, to impute values in the March CPS for whole groups of variables, such as income and employment-related items. The records are classified by a number of characteristics, and the record that is the best match is selected as the "donor" to supply the missing values to the record requiring imputation (the "host"). The Census Bureau's statistical matching procedures have, over the years, replaced somewhat less complex "hot-deck" imputations for more and more items. In the hot-deck method, the data records are arrayed by geographic area and processed sequentially, and the reported values are used to update matrices of characteristics. A record with a missing item is assigned the most recently updated value from the appropriate matrix. Hot-deck methods are largely used for imputation in SIPP.

David et al. (1986) compared the Census Bureau's imputations of earnings in the March CPS with a regression-based imputation—using data from the Internal Revenue Service from a 1981 exact-match CPS-IRS file as the measure of truth—and found that the CPS methods performed quite well in reproducing the overall shape of the earnings distribution. However, they and other analysts have determined that the CPS imputations are less successful for small groups, such as minorities and specific occupations (Coder, no date; Lillard, Smith, and Welch, 1986). Coder (1991), in an exact match of the March 1986 CPS with IRS records for married couples with earnings, found that records with imputations for CPS earnings contributed significantly to the overall underestimate of wages and salaries in the CPS in comparison with the IRS tax returns. Thus, while mean CPS earnings in cases with no imputations were 98 percent of mean IRS earnings, mean CPS earnings in cases with imputations were only 89 percent of mean IRS earnings. Also, while 95 percent of cases with no imputations had CPS earnings within 1 decile of IRS earnings, only 66 percent of cases with imputations were in this close agreement.
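
A minimal sketch of the sequential hot-deck idea described above, under the simplest possible assumptions: records are processed in order, each reported value updates a "most recent donor" table keyed by a single characteristic, and a missing value is filled from that table. The real CPS and SIPP procedures classify records by many characteristics, seed the deck with starting values, and, for the CPS, use the statistical-match refinements noted in the text.

```python
# Sketch of sequential hot-deck imputation for a single income item.
# Records are assumed to be sorted by geographic area; the donor "matrix"
# here is keyed by one characteristic only, which is a simplification.

records = [
    {"id": 1, "cell": "age<40", "interest_income": 120.0},
    {"id": 2, "cell": "age40+", "interest_income": 800.0},
    {"id": 3, "cell": "age<40", "interest_income": None},   # missing
    {"id": 4, "cell": "age40+", "interest_income": None},   # missing
    {"id": 5, "cell": "age<40", "interest_income": 300.0},
]

last_reported = {}  # most recently reported value by cell (the "hot deck")

for rec in records:
    value = rec["interest_income"]
    if value is None:
        # Impute from the most recent donor in the same cell, if any.
        rec["interest_income"] = last_reported.get(rec["cell"])
        rec["imputed"] = True
    else:
        last_reported[rec["cell"]] = value
        rec["imputed"] = False

for rec in records:
    print(rec)
```

Because the deck here is not seeded, a missing value with no earlier donor in its cell would simply remain missing; production systems supply cold-deck starting values to avoid that.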

The available evidence suggests that the SIPP imputation procedures could also be improved. Several studies have focused on the population eligible for assistance programs and have identified problems because the current procedures do not take low income or receipt of program benefits into account in imputing program-related variables. Doyle and Dalrymple (1987) found that imputing income in the 1984 SIPP panel for households that reported receiving food stamps produced a larger proportion of households with monthly incomes high enough to make them ineligible for Food Stamp Program benefits than was found among households that reported both their cash income and their food stamps. Allin and Doyle (1990) compared program participants from the 1984 SIPP panel whom they simulated to be eligible for food stamp benefits with participants whom they simulated to be ineligible because of excessive incomes or asset holdings: they found that only 5 percent of the eligible participants but 28 percent of the ineligible participants had some or all income or asset values imputed. Coder (1992b), in an exact match of the 1990 SIPP panel with IRS records for married couples with earnings, found results similar to the 1986 CPS-IRS exact-match study reported above. Records with imputations for SIPP earnings contributed significantly to the overall underestimate of wages and salaries in SIPP in comparison with the IRS tax returns. Thus, while mean SIPP earnings in cases with no imputations were 94 percent of mean IRS earnings, mean SIPP earnings in cases with imputations were only 85 percent of mean IRS earnings. Also, while 88 percent of cases with no imputations had SIPP earnings within 1 decile of IRS earnings, only 75 percent of cases with imputations were in this close agreement.

Other Sources of Error

A number of other error sources have been identified in the March CPS and SIPP, particularly with regard to poverty and related income statistics, although no definitive results are available on their effects.

The CPS, like other surveys with a rotation group design, is subject to rotation group bias, in that respondents who are newer to the survey give different responses than do respondents who have been in the survey for a longer period. For example, the unemployment rate estimated for households in the incoming CPS rotation group each month is 7 percent higher than the average for all eight rotation groups (Bailar, 1989: Table 6). There has been no analysis of how rotation group bias might affect poverty and income estimates from the March supplement.
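
The rotation group comparison cited above reduces to a ratio of the incoming rotation group's estimate to the average across all eight groups. The rates below are invented, not the Bailar (1989) figures; they are chosen only so that the index comes out near the cited 7 percent.

```python
# Rotation group index: estimate from the incoming (month-in-sample 1) group
# relative to the average of all eight rotation groups. Rates are invented.

unemployment_by_rotation_group = [7.6, 7.2, 7.1, 7.0, 7.0, 6.9, 6.9, 7.0]

average = sum(unemployment_by_rotation_group) / len(unemployment_by_rotation_group)
index = unemployment_by_rotation_group[0] / average * 100

print(f"all-group average: {average:.2f} percent")
print(f"incoming-group index: {index:.0f} (100 = no rotation group bias)")
```

An analogous index could in principle be computed for poverty rates from the March supplement, which is the analysis the text notes has not been done.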

Reporting errors, as distinguished from nonresponse, are also a potential problem. Very few record checks that compare survey reports with independent sources (e.g., tax or program records) for the same people have been conducted for the March CPS. Coder (1991) conducted such a record-check study in his 1986 exact-match CPS-IRS analysis. He noted that the net CPS aggregate underestimate of 2-3 percent masked widespread over- and underreporting of amounts and that the imputation procedures did little to correct the bias from nonresponse. Despite these errors, the CPS distribution of earnings was very similar to that derived from the IRS. The most serious error problems were concentrated at the bottom and top of the distribution.

Estimates of poverty and income from the March CPS are affected by the fact that the sample comprises persons present at the March interview who are asked about income in the preceding calendar year. Thus, income from people who died during the year or otherwise left the survey universe is missed entirely (this is not true for SIPP). Also, family composition is measured as of the March following the income reference year, and no information is obtained about intrayear changes in composition. For example, two people found to be married as of March will be classified as a married couple for the entire income reference year and assigned the combined income of both spouses for that year. However, this treatment is misleading, with regard to classification both by family type and by income level, if, in fact, the couple's marriage took place after the start of the income year. The limited available evidence suggests that annual poverty rates in the CPS may be biased upward to some extent by the mismatch of family composition and income (see Czajka and Citro, 1982; Williams, 1987; see also Lamas, Tin, and Eargle, 1994).

In SIPP, researchers have looked at the equivalent of rotation group bias, namely time-in-sample or conditioning effects. As a panel progresses, respondents may acquire new knowledge that affects their behavior: for example, they may apply for benefits from government assistance programs as a direct consequence of learning about such programs from the survey. They may also gain experience with the questionnaire that leads them to give either less accurate or more accurate answers than in earlier interviews. However, studies conducted with SIPP to date suggest that conditioning effects are scattered and of limited effect (see, e.g., Pennell and Lepkowski, 1992).

Some record-check studies have been conducted with SIPP, including the 1990 SIPP-IRS exact match (Coder, 1992b). Marquis and Moore (1990a, 1990b) carried out a record-check study that matched SIPP records in four states from the first two waves of the 1984 panel with records from eight state and federal programs (AFDC, food stamps, unemployment insurance, workers' compensation, federal civil service retirement, Social Security, SSI, and veterans' pensions and compensation). The results showed negatively biased participation rates for most programs: that is, net underreporting of participation, although there were overreports as well as underreports. For most programs, there appeared to be relatively little bias in the reporting of benefit amounts by those who correctly reported their participation. In one state, a large proportion of AFDC recipients incorrectly reported their benefits as general assistance.
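
At its core, a record-check study of participation reporting is a cross-tabulation of the survey report against the administrative record for matched people. The matched pairs below are invented; the tabulation is not the Marquis and Moore design itself, only the kind of bookkeeping such a design yields.

```python
# Cross-tabulate matched survey reports against administrative records for
# one program. Pairs are invented: (reported participation, record shows it).

matched_pairs = [
    (True, True),   (True, True),   (False, True),  (False, True),
    (False, False), (False, False), (True, False),  (False, True),
]

n = len(matched_pairs)
survey_rate = sum(reported for reported, _ in matched_pairs) / n
record_rate = sum(actual for _, actual in matched_pairs) / n
underreports = sum((not r) and a for r, a in matched_pairs) / n
overreports = sum(r and (not a) for r, a in matched_pairs) / n

print(f"survey participation rate: {survey_rate:.0%}")
print(f"record participation rate: {record_rate:.0%}")
print(f"net bias: {survey_rate - record_rate:+.0%} "
      f"(underreports {underreports:.0%}, overreports {overreports:.0%})")
```

The pattern described in the text, net underreporting produced by many underreports and a smaller number of overreports, is what the last line of output summarizes.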

One problem identified in SIPP and other longitudinal surveys is the "seam" phenomenon, in which respondents are more likely to report changes (e.g., going off or on a welfare program) between pairs of months that span two interviews (e.g., for SIPP, months 4-5, 8-9, 12-13, etc.) than between pairs of months for which data are collected in the same interview. The seam problem affects most variables for which monthly data are collected in SIPP—often strongly. For example, in the first year of the 1984 SIPP panel, over twice as many nonparticipants reported entering the Social Security program between seam months as between nonseam months (Jabine, King, and Petroni, 1990: Table 6.2). The reasons for the occurrence and extent of the seam phenomenon are not well understood, but it clearly results in errors in the timing of transitions in SIPP and in the duration of spells of program participation (and perhaps of poverty). It may or may not result in errors in the number of transitions that occur within a given period. For example, in the case of food stamps, total exits and entrances from SIPP are close to the rates derived from food stamp administrative records. In contrast, whether due to the seam effect or other factors, entrance rates from SIPP for SSI are significantly higher than those shown by program records (Jabine, King, and Petroni, 1990:59-60). The Census Bureau has pursued research and testing of alternative questionnaire designs and interviewing procedures that could reduce the seam problem and produce more accurate income reporting overall (see, e.g., Marquis, Moore, and Bogen, 1991). To date, there have been few positive results.
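
The seam effect can be quantified directly from the monthly data by comparing transition rates at month pairs that span two interviews with rates at month pairs reported within a single interview. The participation histories below are invented; the seam locations after months 4 and 8 follow the 4-month interviewing cycle described above.

```python
# Compare reported transition rates at "seam" month pairs (spanning two
# interviews) with rates at within-interview month pairs.
# Histories are invented monthly participation indicators for 12 months.

histories = [
    [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],  # enters exactly at the 4-5 seam
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0],  # exits exactly at the 8-9 seam
    [0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # enters within an interview
]

SEAM_PAIRS = {(4, 5), (8, 9)}  # month pairs that span two interviews

seam_transitions = nonseam_transitions = 0
seam_pairs = nonseam_pairs = 0

for history in histories:
    for month in range(1, len(history)):       # month pairs (m, m+1)
        pair = (month, month + 1)
        changed = history[month - 1] != history[month]
        if pair in SEAM_PAIRS:
            seam_pairs += 1
            seam_transitions += changed
        else:
            nonseam_pairs += 1
            nonseam_transitions += changed

print(f"transition rate at seams:     {seam_transitions / seam_pairs:.2f}")
print(f"transition rate off the seam: {nonseam_transitions / nonseam_pairs:.2f}")
```

Applied to real monthly histories, and computed separately for entries and exits to a given program, this is the tally behind comparisons like the Social Security figure cited above.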

Aggregate Comparisons

Aggregate comparisons of income estimates from SIPP and the CPS, like comparisons of internal indicators of data quality, show a mixed picture. On balance, SIPP seems to be doing a somewhat better job of income reporting, but not for all income types. Moreover, it may be that the gains in SIPP are not holding up over time. Comparisons of 1984 estimates from the 1984 SIPP panel and the March 1985 CPS showed SIPP estimates as a percentage of CPS estimates as follows (Jabine, King, and Petroni, 1990: Table 10.8):

Total money income: 100.1
Regular money income: 99.9
Earnings: 98.2
All other: 106.0
Public and private transfers: 111.6
Property income: 103.1
All other regular money income: 37.0
Lump-sum payments: N.A. (not collected in the CPS)

SIPP performed better than the March CPS, with the notable exception of earnings. (The low ratio for all other regular money income is presumably due to higher levels of reporting of specific income types in SIPP than in the March CPS.) Census Bureau analysts assume that many SIPP respondents are reporting their net paychecks rather than their gross earnings as requested by the survey.

Coder and Scoon-Rogers (1994) reported comparisons for detailed income sources for 1984 and 1990. These comparisons indicate that some of the gains in income reporting seen in SIPP at the outset of the survey may no longer be occurring. However, they noted that the 1990 SIPP panel may not be comparable to the 1984 panel because it contained an added sample, carried over from the 1989 panel, of households headed by single mothers and minorities. The weighting adjustments for these added cases may be problematic.

As with the review of internal indicators of data quality, it is difficult from the available comparisons of aggregates to draw conclusions about the implications for estimates of poverty and related income statistics. Perhaps the most telling summary indicator available is the fact, noted above, that SIPP poverty estimates are consistently several percentage points below those from the March CPS. Lamas, Tin, and Eargle (1994) found that only about one-sixth of this difference could be explained by attrition bias in SIPP. Another one-sixth of the difference appears to be due to more accurate measurement of family composition during the income reporting year in SIPP than in the March CPS. The remaining two-thirds of the difference, it is hypothesized, is explained by more complete reporting of income in SIPP at the lower end of the income distribution. In that regard, respondents to SIPP report more sources of income than respondents to the March CPS; they also report higher amounts for such income sources as Social Security, Railroad Retirement, SSI, unemployment compensation, veterans' payments, and child support payments, all of which are important to the low-income population. However, reporting of AFDC and other cash welfare is currently no more complete in SIPP than in the March CPS (Coder and Scoon-Rogers, 1994: Table 1). Clearly, much more analytical work needs to be done, including work to examine differences in income reporting among population groups within and across the surveys and to develop a complete time series of poverty and related income statistics from SIPP for comparison with the March CPS.
