Envisioning the 2020 Census (2010)

Chapter: Appendix A: Past Census Research Programs

Suggested Citation:"Appendix A: Past Census Research Programs." National Research Council. 2010. Envisioning the 2020 Census. Washington, DC: The National Academies Press. doi: 10.17226/12865.

–A–
Past Census Research Programs

The descriptions of census research in this section generally exclude operations and analyses directly related to census coverage measurement—matching of the results of an independent postenumeration survey to census records in order to estimate undercount and overcount. We do, however, try to describe some of the formative work along these lines in the 1950 and 1960 censuses, given the novelty of the approach in those counts. Furthermore, although these descriptions include experiment and evaluation work related to content that was part of the long-form-sample questionnaire prior to 2010, we exclude some work related to areas that are now fully out of scope of the decennial census (in particular, the census of agriculture).
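The coverage measurement logic referred to above reduces, in its simplest form, to the capture-recapture (dual-system) estimator: the census and an independent postenumeration survey each "capture" part of the population, and the match rate between the two implies an estimate of the total. A minimal sketch with purely illustrative numbers (not drawn from any census described here):

```python
def dual_system_estimate(census_count: int, pes_count: int, matched: int) -> float:
    """Lincoln-Petersen dual-system estimate of true population size.

    census_count: correct enumerations in the census for the study area
    pes_count:    persons found by the independent postenumeration survey
    matched:      PES persons who match a census record
    """
    if matched == 0:
        raise ValueError("No matches: the estimate is undefined.")
    return census_count * pes_count / matched

# Illustrative only: 900 census enumerations, 800 PES persons, 720 matched
# gives an estimated true population of 1,000, i.e., a net undercount of 100.
estimate = dual_system_estimate(900, 800, 720)
```

Actual census coverage measurement layers many refinements (erroneous-enumeration screening, matching error, poststratification) on top of this basic identity.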

A–1
1950 CENSUS

A–1.a
Principal Pretests and Experiments Conducted Prior to the Census
Supplements to the Current Population Survey (CPS)
  • March 1946 (CPS areas, nationwide): test of collection of both current residence (where the interviewer found a respondent at time of interview) and usual residence (the CPS and decennial census standard) information; provided particular information on enumeration of non-residents staying with households and college students.

  • April 1948 (CPS areas, nationwide): tests of method of obtaining income data and of enumeration of people by both usual and current residence. The test resulted in determination of pattern for asking income questions in 1950 as well as decisions (confirming residence rules) on enumeration of nonresidents and college students.

  • May 1948 (CPS areas, nationwide): test of questions on physical characteristics of dwellings; led to revised definition of “dwelling unit.”

Experiments Conducted in Special Censuses or Other Surveys
  • April 1946 (Wilmington, NC): experiment conducted as part of special census, focusing on collection of both current and usual residence information. Changes to enumerator training and questions on general population characteristics were also tested. The test resulted in first draft of population questions for 1950.

  • February 1948 (Washington, DC): experiment conducted as part of survey conducted for National Park and Planning Commission, Bureau of Labor Statistics, and Housing and Home Finance Agency. The experiment focused on questions on income and led to revision in the format of the schedule, specific questions, and instructions.

  • May 1948 (Little Rock and North Little Rock, AR): experiment conducted as part of special census on self-enumeration techniques; test resulted in information on possible response rates and comparative costs.

  • June 1948 (Philadelphia, PA): experiment attached to survey conducted for Interdepartmental Subcommittee on Housing Adequacy on methods of measuring housing quality; test resulted in revised definition of “dilapidation.”

  • March 1949 (Chicago, IL, and adjacent counties): experiment conducted as part of Chicago Community Survey on rostering and obtaining complete enumeration of persons in households. Significantly, the experiment tested the use of a household questionnaire rather than either the master ledger-size schedule then used for census interviewing or individual person questionnaires.

  • June 1949 (Baltimore, MD): test to check the quality of reported housing data conducted as part of survey conducted for Baltimore Housing Authority; test led to some revision of housing questions.

Tests of Specific Phases or Operations of 1950 Censuses
  • May 1947 (Altoona, PA; Charlotte, NC; Cincinnati, OH; Louisville, KY): test on “document sensing,” or a format for the schedule to enable cards to be punched automatically; test indicated that technique was possible.

  • January 1948 (6 southern counties): test of special Landlord-Tenant Operations Questionnaire led to revision of procedures.

  • August 1949 (33 field offices): test of alternative population and housing schedules provided input to determination of final schedule (questionnaire) for 1950.

  • September 1949 (Puerto Rico): test of population, housing, and agriculture questions for enumeration in Puerto Rico.

  • October 1949 (Raleigh and Roxboro, NC): test of training procedures led to determination of final training plan.

  • November 1949 (Raleigh, NC): test of questions on separate Survey of Residential Financing to be conducted simultaneously with the 1950 census.

  • January 1950 (Chicago, IL): test of Survey of Residential Financing questions helped determine final procedures.

Dress Rehearsals
  • April–May 1948 (Cape Girardeau and Perry Counties, MO): test intended to (1) compare quality of data obtained from a schedule pared down to very few questions to one with many questions; (2) collect both current and usual residence information; and (3) generally assess quality of data from new questions. The pretest led to the conclusion that the short schedule yielded no substantial gains in quality over the longer instrument. It also helped refine residence rules for some census types and specification of the duties of enumeration crew leaders.

  • October 1948 (Oldham and Carroll Counties, KY; Putnam and Union Counties, IL; Minneapolis, MN): test focused on different enumeration procedures (self-enumeration, distribution of materials by post office, etc.) and their effects on data quality. Based on the test, the Census Bureau decided to use self-enumeration in the Census of Agriculture. The test also yielded cost, time, and quality data for the different approaches.

  • May 1949 (Anderson City, School District 17, and Edgefield County, SC; Atlanta, GA; rural areas near each of 64 CPS field offices): test of training methods and final questionnaires and training procedures led to some modification in procedures (including procedures for shipping supplies to local offices) and determination of procedures for the postenumeration survey.

SOURCE: Adapted from U.S. Census Bureau (1955:Table B, p. 6), with additional information from Appendix E of that document.

A–1.b
Research, Experimentation, and Evaluation Program

Although the 1950 census was the first to include a structured experimental and evaluation program (Goldfield and Pemberton, 2000a), details of the precise evaluations conducted in 1950 are not generally available. Only slightly more information about the specific experiments appears in the Census Bureau’s procedural history for the 1950 census. In a short section titled “Experimental Areas,” that history notes (U.S. Census Bureau, 1955:5):

A number of variations in the procedures for collecting data were introduced in ten District Offices. These variations made possible a comparison of procedures under actual census conditions. The experimental areas were located in Ohio and Michigan. In six of these districts, the alternative procedures involved the use of a household schedule (instead of a line schedule for a number of households), of the household as a sampling unit (instead of the person), and of self-enumeration (instead of direct enumeration). In four of the districts, assignments were made to enumerators in such manner that the variation in response could be studied in terms of enumerator differences.

Remarking on the self-enumeration portion of the 1950 experiments, the procedural history for the 1960 census clarifies that the test areas were in Columbus, OH, and Lansing, MI, and that part of the experiment requested that individual respondents complete and mail back (on Census Day) the questionnaires left with them by field enumerators (U.S. Census Bureau, 1966:292). The impact of the 1950 experiment on the later adoption of mailout-mailback methodology in the census is discussed by Bailar (2000).

A–2
1960 CENSUS

A–2.a
Principal Pretests and Experiments Conducted Prior to the Census
Supplements to the Current Population Survey (CPS)
  • November 1958: test of definitions for unit of enumeration.

  • April 1959: test of farm definitions.

Experiments Conducted in Special Censuses or Other Surveys
  • March–April 1957 (Yonkers, NY): major experiment conducted as part of special census. The experiment had two principal objectives: (1) obtaining data on the cost of a two-visit interview process (and effects on data quality) and (2) testing direct entry of interview information on machine-readable forms, either by enumerators or household respondents. In particular, a two-interview approach was tested in which—for roughly every fourth household—a limited amount of information was collected in a first interview and a long-form questionnaire was left for completion (and eventual pickup by an enumerator). The test demonstrated that enumerators could readily use the computer-readable schedules in the field but that respondents found them more difficult to follow. The two-visit approaches were found to be very expensive because of the difficulty of finding household members at home when enumerators attempted their visits. However, the idea of independent listing of dwelling units by crew leaders as well as enumerators was found to have potential value for improving coverage. The computer-readable forms generated by the test also helped with debugging the collection and tabulation routines. The Yonkers test also gave the Census Bureau the chance to test a new question on address of place of work, but the Bureau found the results to be unsatisfactory.

  • October 1957 (Indianapolis, IN): focused test of several possible coverage improvement techniques in one postal zone as part of a special census. Building on the Yonkers test, one approach compared results of a recheck of listed addresses by crew leaders with enumerators’ original results. Another technique involved preparing postcards based on enumerator interviews, with the postal carriers instructed to report boxes for which there were no cards for further verification (deliberately withholding a small sample of cards as a test of the carriers). Finally, the Bureau tested the completeness of lists of old-age assistance beneficiaries, juvenile delinquents, and other persons believed to live in low-income families obtained from local authorities, as well as the distribution of census forms in public schools for parents to complete. The tests provided evidence of coverage gains from the post office check and independent crew leader listings and suggested that the locally provided special lists were useful in indicating missed persons.

  • October 1957 (Philadelphia, PA): census experiment focused on wording and placement of questions on labor force status, address of place of work, and date of marriage. Three different schedules were used in the experiment, which deliberately sent inexperienced interviewers to do initial questioning and trained CPS interviewers for verification interviews. One tested approach for the place of work question asked respondents to identify the location on a map of the city. The test resulted in revisions of the occupation and industry questions in the final census schedule.

  • November 1957 (Hartford City, IN): test of mailout census methodology as part of a special census. A four-page Advance Census Report was mailed to individual household addresses; respondents were asked to complete the form but not to mail it back, holding it for an enumerator’s visit instead. In the test, about 40 percent of households had completed part or all of the questionnaire prior to the visit, and the advance questionnaire seemed to be particularly useful in improving the quality of data on value of home.

  • January 1958 (Memphis, TN): test of self-enumeration and mailback methodology as part of special census. Housing units were listed by enumerators and questionnaires distributed, with instructions to complete and mail on the census date. Mailed returns were chosen for follow-up verification, some by telephone and others by trained CPS interviewers; the post office check used in the Indianapolis test was also used for portions of Memphis. Part of the Memphis test also experimented with collection of information on visitors present in households on the census date and using reported usual residence information to try to allocate them to their usual home. Based on this test, a process for collecting “usual home elsewhere” information for some transient populations (enumerated where they are found on Census Day) was used in the 1960 census and in group quarters enumeration in subsequent censuses. Finally, the experiment asked some enumerators to query respondents for some information on neighboring living quarters (above or below, right or left, in back or in front); the technique was subsequently adopted as part of the evaluation process of the 1960 census.

  • February 1958 (Lynchburg, VA): further testing of mailout of Advance Census Reports as part of special census. As in the Memphis test, CPS interviewers and crew leaders were used to reinterview some households to verify the quality of mailback data. Attempts were also made to time interviews to determine whether the Advance Census Report reduced the time spent in enumerator interviewing. In May 1958 enumerators returned to Lynchburg to recheck and verify the quality of housing data collected in February.

  • March 1958 (Dallas, TX): census experiment on alternative questionnaire forms. Two small samples of households were interviewed using different schedules, specifically trying different methods for collecting information on income and place of work, on housing equipment items (e.g., type of heating and presence of air conditioning), and a 5-year versus 1-year migration question.

  • April, June 1958 (Ithaca, NY): test conducted as part of special census (with staff follow-up) on enumerators’ ability to classify types of living quarters.

  • October 1958 (Martinsburg, WV): test intended to determine which of three alternative population and housing questionnaires could be used most effectively in the field. The content of the questionnaires was identical, and essentially identical to the questions that would be used in the 1960 census, but the alternative schedules varied in size and structure. The Martinsburg test also delivered Advance Census Reports prior to enumerator visits. Because the test was sufficiently close to the actual census, it also permitted the Bureau to evaluate the training materials planned for use in 1960 as well as editing and coding routines.

  • December 1958 (Philadelphia, PA): limited pretest to compare two possible forms with different skip patterns, after a final decision to use some self-enumeration in the 1960 census.

  • Mid-1959 (800 households in 10 regional headquarters cities): small-scale test following the full-dress exercise in North Carolina (described below), testing variants of some questions to determine whether adding check boxes to routing questions (e.g., “If you do not live in a trailer, check here and continue with the next question.”) promoted fuller response. Questionnaires were also distributed with return envelopes for mailback.

“Informal” Experiments and Pretests for the Census of Housing
  • June 1957 (Washington, DC): test of procedure for listing structures.

  • October 1957 (12 standard metropolitan areas): test of collection of data on condition of housing unit in selected 1956 National Housing Inventory segments.

  • March 1958 (New York, NY): test of classification of living quarters; used to formalize 1960 census definition of housing unit based on separate entrance or separate cooking equipment.

  • March, May 1958 (Prince George’s County, MD): tests of questions on exterior materials of housing and on basement shelters.

  • June 1958 (Lynchburg, VA): recheck of data on condition and housing unit from May 1958 pretest.

  • September 1958 (Port Chester, NY): recheck of data on classification of living quarters.

Dress Rehearsal
  • February–March 1959 (Catawba and Rutherford Counties, NC): final full-scale test prior to the census, making use of the Advance Census Report delivery that would be used in the 1960 count. Households chosen for the long-form sample were asked to complete their questionnaires and mail them to their local census office.

SOURCE: Adapted from U.S. Census Bureau (1966:App. B), with supplemental information from Part III, Chapters 1 and 2 of that document.

A–2.b
Research, Experimentation, and Evaluation Program

A long list of possible research studies for the 1960 census—based, in part, on input from the Panel of Statistical Consultants (an advisory group to the Census Bureau’s assistant director for research and development)—was refined to a final list of 22 studies. The Bureau divided these 22 studies into 8 “projects.”

  1. Measurement of response variability

    • An expanded version of a study performed in 1950 to measure variability in response due to enumerators and census staff; the 1960 version of the “Response Variance Study I” drew its sample from all areas where questionnaires were mailed (with enumerators sent to pick them up or conduct interviews), rather than from four selected areas as in 1950. The sample to which crew leaders and enumerators were assigned in order to estimate these effects included about 320,000 housing units and 1,000,000 persons.

    • A follow-up study, called “Response Variance Study II,” designed to measure variability in response due to the respondents themselves. A sample of 5,000 households from Response Variance Study I was drawn and enumerators sent to conduct interviews; a second sample of 1,000 housing units was asked to report again using a mailed, self-response questionnaire (with mailback, and interviewer follow-up if necessary).

    • “Response Variance Study III” looked at variability due to coding and data entry. For a one-fourth sample of households for which data was collected in Response Variance Study I, photocopies of the enumeration books were made; pairs of coders then independently coded and transcribed the data.

  2. Reverse record checks (undercoverage in general population): a check to see whether persons in an independent sample were enumerated in the 1960 census, in which the independent sample was culled from a mix of past census records and administrative data. Specifically, the sample was constructed from samples from four sources: (1) persons enumerated in the 1950 census; (2) immigrants and aliens registered in January 1960 with the Immigration and Naturalization Service; (3) birth records for children born between the 1950 and 1960 censuses; and (4) persons who were found in the 1950 postenumeration survey but not in the 1950 census.

  3. Reverse record checks II (undercoverage in specific groups): a check similar to the preceding reverse record check study, except that the records-based sample consisted of Social Security beneficiaries and students enrolled at colleges or universities.

  4. Reenumerative studies of coverage error

    • Postenumeration survey based on an area sample: a sample of “segments” that had been independently canvassed by enumerators for the Survey of Components of Change and Residential Finance (SCARF), conducted alongside the census. Provided with both 1960 census and SCARF information, enumerators in 2,500 area segments were tasked to search for omitted housing units or structures mistakenly labeled as housing units. About 10,000 housing units were administered a detailed housing questionnaire.

    • Postenumeration survey based on a list sample: a sample of about 15,000 living quarters (both housing units and group quarters) already enumerated in the census was drawn, representing about 5,000 clusters of about 3 units each, dispersed across 2,400 enumeration districts. For each unit, a detailed reinterview was conducted in order to list persons within living quarters (this follow-up interview was conducted in early May 1960, so that not much time had passed since the April 1 Census Day). Enumerators were also asked to list “predecessor” and “successor” housing units or group quarters (e.g., neighboring units along a specified path of travel) in order to further check on missed housing units.

  5. Measurement of content error in data collection

    • Reinterview of about 5,000 households in the long-form sample for the 1960 census, using specially trained enumerators: a first phase (about 1,500 households in July 1960) sent the enumerators to conduct blind reinterviews; in the second phase (October 1960), dependent interviewing using the household’s reported 1960 census information was conducted for half of the remaining sample and for the other half blind interviewing, as in the first phase.

    • Reenumerative study focusing on housing characteristics, administering a battery of housing characteristics: about half of the 10,000 housing units in the study received the short form in the 1960 census, and the others were in the long-form sample (and hence had already reported some of the housing items).

    • Match of records from the 1960 census long-form sample and the Current Population Survey’s March or April 1960 samples.

    • Match of respondent-provided occupation and industry information with data collected directly from employers.

    • Match of about 10,000 sampled Internal Revenue Service returns to census records (although only about one-fourth of these were studied in depth regarding the consistency of reported income).

  6. Studies of processing error

    • A September 1963 detailed review of a sample of individually filled Advance Census Reports, Household Questionnaires (enumerator schedules), and the coded computer-readable forms in order to estimate transcription and other response errors.

    • Coding error study in which a set of long-form questionnaires was separately coded by three coding clerks, only one of whom was the designated “census coder” whose work went onto the final questionnaire.

    • Study of the accuracy of automated editing rules in the microfilm–computer system for data collection and tabulation.

  7. Analytical studies: general studies of census quality and coverage, including comparison of census counts with demographic analysis estimates.

  8. Post Office coverage improvement study: postcensus postal check making use of postal employees and resources. A sample area of 10,000–15,000 housing units was selected in each of the 15 postal regions in the continental United States. These areas were matched to census enumeration districts. Postal carriers were asked to review name and address cards completed by enumerators for every counted household, making new cards for households on their routes that were not included in the census.

SOURCE: Adapted from U.S. Census Bureau (1963).

A–3
1970 CENSUS

A–3.a
Principal Pretests and Experiments Conducted Prior to the Census
Tests Conducted as Part of Special Census or Other Survey
  • August 1961 (Fort Smith, AR): test conducted as part of special census to compare an address register compiled through enumerator visits with a register based on 1960 census records, new building permits, and a postal check. The separate listings were “found to be just as complete.” Enumerator-visited households were also left with a brief questionnaire and asked to return the form by mail.

Pretests of Census Operations and Questionnaires
  • June 1962 (Fort Smith, AR, and Skokie, IL): tests of address list updating using building permits and postal checks (updating the Fort Smith list from the August 1961 special census and repeating the methodology in Skokie). Short questionnaires were mailed to all households for response by mail, yielding 71–72 percent return rates.

  • April 1963 (Huntington, NY): further testing of address listing and mailout-mailback methods, now focused on a larger city—deemed to be a rapidly growing area and including a mix of urban, suburban, and rural housing types. The test also included administering a long-form questionnaire to approximately 25 percent of households. In nonmail delivery areas, address lists were built through enumerator canvass and questionnaires deposited at the households for mailback.

  • May–June 1964 (Louisville, KY, Standard Metropolitan Statistical Area [SMSA]): based on the Fort Smith, Skokie, and Huntington experiences, the Census Bureau requested and received funds for larger-scale testing of mailout-mailback methods (experimental censuses) in 1964 and 1965. The first of these was to be conducted in an area of about 750,000 population with a large central city; the Louisville area was selected (with the test being conducted from a special census office in Louisville, rather than the Bureau’s nearby processing center in Jeffersonville, IN). The Louisville test added separate listing and contact strategies for “special places” (mainly group quarters), based on input from a task force on difficult-to-enumerate areas that had been convened in Louisville in late 1963. In this test, the Census Bureau concluded that a computer-generated address register based on previous census records and building permits was more complete than listing books completed by enumerators. In particular, “using a list based on enumerator canvass did not seem the ideal way to insure complete coverage in multiunit structures in city delivery areas.” The Bureau further concluded “that at the current state of development of procedures the cost of a mail census, including certain important improvements over the 1960 approach, would not exceed the cost of a census by enumerator canvass.”

  • April 1965 (Cleveland, OH): Cleveland was chosen as the site for the second experimental mailout-mailback census both for its big-city nature and for its perceived enumeration problems in the 1960 census. For this test, the Census Bureau experimented with using a commercial mailing list as the base for the address register; the test was also the first to be geocoded by computer based on an address coding guide (i.e., coded information on address ranges on odd and even sides of street segments). As in Louisville, both short- and long-form questionnaires were distributed. The Cleveland test also centralized editing procedures, assigning completeness checks of returned questionnaires to district office clerks rather than individual enumerators. However, the Bureau’s attempts to predesignate hard-to-enumerate areas were deemed to be less successful than was the case in Louisville, and probe questions added to the questionnaire on other households at the same street address were generally found to be unclear and confusing. Furthermore, an attempt to test the effectiveness of rotating district office staff through different operations was also deemed unsuccessful, as many clerks balked at performing fieldwork as enumerators. Together, the Louisville and Cleveland tests convinced the Bureau that conducting at least part of the 1970 census by mailout-mailback was viable.

  • May 1966 (St. Louis Park, MN, and Yonkers, NY): the first “content pretest” conducted principally by mail with little or no field follow-up. Questions on native tongue and national origin were of particular interest, and the two sites were chosen accordingly (predominantly Scandinavian origin in St. Louis Park and wide ethnic diversity in Yonkers). About 2,500 households in each area were drawn directly from 1960 census records to facilitate comparison of an “occupation six years ago” question with the reported occupation in the 1960 census. Editing and some follow-up rechecks were performed from district offices. The test questionnaires asked respondents for Social Security numbers, which were then compared with Social Security Administration records and generally found to be accurate.

  • May 1966 (national sample): test of two alternative formats of a mail questionnaire (one presenting questions for each person on two facing pages and the other using a more traditional columnar format), administered to a national sample of about 2,300 housing units.

  • January–October 1966 (Wilmington, DE, SMSA): further test to compare completeness of coverage of address registers compiled from commercial sources and postal checks with those formed by field canvassing. In different phases, the work in Wilmington focused on rural and locked-box addresses (comparing a January post office list with a field canvass in April–May) and city delivery addresses (having postal carriers check a commercial list in March, and sending enumerators to field verify “undeliverable” addresses in October).

  • January 1967 (Meigs, Morgan, Perry, Vinton, and Jackson Counties, OH): test of alternative strategies for address canvassing, either knocking on every door or knocking only when necessary to verify the address or existence of housing units. The two approaches were eventually judged to have about equal effectiveness, but it was reasoned that this may be due to the preponderance of single-family structures (and that the knock-only-when-necessary approach might be problematic in areas with multiunit structures).

  • March 1967 (Memphis, TN): test comparing intensive “blitz” enumeration by teams of enumerators with normal enumeration and interviewing procedures in a 14-tract (about 25,000 population) area characterized by low-income and deteriorating housing.

  • March 1967 Content Pretest (Gretna, LA): test of employment questions (revised for compliance with new federal government standards) in two census tracts, chosen for having a high unemployment rate and above-average proportion of substandard housing. New items on mobility, marital history, birth dates of children ever born, and disability were also included on test questionnaires. The Census Bureau also received a list of persons arrested during the study period from the Gretna Police Department in an attempt to see if they could be matched to the pretest records, but such matches were generally unsuccessful.

  • April 1967 (New Haven, CT, SMSA): further test of mailout-mailback census techniques, building on the Louisville and Cleveland tests, particularly focused on centralized office operations (as in Cleveland) and adding coverage improvement operations to field follow-up work. New questionnaire content, including Social Security number and vocational training, was also included on the test questionnaires. The qualifier “ever married” was dropped from the question on children ever born, which yielded surprisingly little reaction; however, the Bureau concluded that “New Haven respondents objected to the number of questions on bathroom facilities in particular, and the large number of sample questions in general.” The address coding guide technique for geocoding results was further refined for the New Haven test, and a commercial mailing list (with prelisting or canvassing of rural delivery addresses) was the base for the address register. Among the coverage improvement techniques tried in this test were soliciting lists of people likely to be missed from community organizations and a “movers check” focused primarily on people who moved during the months before and after Census Day.

  • May 1967 Questionnaire Format Test (national sample): test of four different designs of mailing pieces (e.g., foldout sheets versus booklets, variations in question formatting) administered to 4,900 urban housing units in a national sample. The version of the questionnaire modeled on that used in the New Haven experiment was judged to be most practical for both respondents and the Census Bureau.

  • August 1967 (Detroit, MI): canvass of about 800 addresses (on 450 postal routes) in buildings with at least two housing units, intended to get a sense of variation in apartment numbering styles or, if unnumbered, what kind of locational labels might be used to describe individual units.

  • September 1967 (North Philadelphia, PA): mailout exercise (using short- and long-form questionnaires) in two inner city tracts (about 21,000 population in total) where population was suspected to have dropped significantly. The test also involved recruiting temporary enumerators familiar with the area to collect questionnaires and look for missed buildings and living quarters. In particular, an attempt was made to use high school students as enumerators (with school counselor supervision), but this was found to involve numerous administrative problems (and encountered some resistance by respondents).

  • October 1967 (Kalamazoo, MI, SMSA): focused study of address list development in “fringe” areas where city delivery and rural service were mixed. About 16,500 such housing units were found in the Kalamazoo area in this test, with about 20 percent being addresses that could be found on city-delivery listings and in the enumerator canvasses.

  • May 1968 Housing Quality Study (Austin, TX; Cleveland, OH; San Francisco, CA): 300 housing units in each of three cities were interviewed and separately rated by American Public Health Association inspectors in order to assess a revised definition of “substandard housing.”

  • August 1968 Subject Response Study (CPS areas, nationwide): specific test of an “occupation 5 years ago” question, reinterviewing about 2,800 households from a 1963 CPS sample.

Dress Rehearsals
  • May 1968 (Dane County, WI): dress rehearsal of all major census processes using a decentralized mail census system, under which receipt, edit, control, and follow-up of questionnaires returned by mail would be distributed among enumerators in local offices.

  • May 1968 (Sumter and Chesterfield Counties, SC): dress rehearsal of all major census operations using traditional nonmail techniques (Advance Census Reports were mailed to households, followed by enumerator visits to pick up questionnaires or conduct interviews).

  • September 1968 (Trenton, NJ): dress rehearsal of all major census processes using a centralized mail census system, under which receipt, edit, and control of questionnaires returned by mail would be managed by clerks in census district offices. Furthermore, enumerators would visit only those households for which follow-up could not be carried out by telephone. A fourth dress rehearsal focused on hard-to-enumerate areas was originally planned for March 1969, but budget constraints forced some of these plans to be folded in with the Trenton dress rehearsal.

SOURCE: Adapted from U.S. Census Bureau (1976:Chap. 2).

A–3.b
Research, Experimentation, and Evaluation Program

The 1970 Census Evaluation and Research Program consisted of 25 individual projects, falling under five broad headings:

  • Evaluation of coverage of persons and housing units

    • Demographic analysis: comparison of census totals with population estimates built from the previous census, birth and death registration, and estimates of migration, as was done in 1950 and 1960.
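
Demographic analysis rests on a population accounting identity: the expected count equals the previous census total plus registered births, minus registered deaths, plus net migration; net census error is then the gap between this benchmark and the census count. A minimal sketch of that arithmetic, using invented figures rather than actual census totals:

```python
# Demographic analysis accounting identity (illustrative figures only).

def demographic_estimate(prev_census, births, deaths, net_migration):
    """Expected population: P = P_prev + B - D + M (net)."""
    return prev_census + births - deaths + net_migration

def net_undercount_rate(estimate, census_count):
    """Net undercount as a share of the demographic benchmark."""
    return (estimate - census_count) / estimate

# Hypothetical national totals in millions, not actual 1970 data.
estimate = demographic_estimate(prev_census=179.3, births=38.0,
                                deaths=17.0, net_migration=3.7)
rate = net_undercount_rate(estimate, census_count=198.2)
print(f"benchmark: {estimate:.1f}M, net undercount: {rate:.2%}")
```

In practice the components themselves carry error (notably migration estimates), which is why projects such as the Medicare record check and birth registration study were run in direct support of the demographic analysis inputs.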

    • Medicare record check: conducted in support of the demographic analysis results, which depend in part on Medicare registration rolls as an auxiliary data source for older age groups. A sample of about 8,500 people age 65 or over was drawn from Social Security Administration records and matched to 1970 census returns to estimate missed persons.

    • Birth registration study: a second project carried out in direct support of demographic analysis estimation. A sample of about 15,000 children born between 1964 and 1968 was derived from household interview records from the Current Population Survey and from the weekly Health Interview Survey conducted for the U.S. Department of Health, Education, and Welfare by the Census Bureau. The compiled vital statistics records of the National Center for Health Statistics—themselves drawn from birth and death certificate data from state and local registration areas—were then searched to try to find the sample children, with field follow-up visits if no birth record data could be found.

    • Housing unit coverage in mail areas: two-part study of the completeness of the address register used for the 1970 census mailing. In the first phase, permanent Census Bureau survey interviewers were tasked to inspect and visit a sample of 20,000 addresses from the mailing list to determine the number of living quarters at each address. In the second, survey enumerators were asked to relist about 8,000 city blocks (or similar-size equivalents in noncity areas).

    • Study of housing unit occupancy status and of census deletes: Interviews and information from the housing unit coverage study described above were also analyzed to try to sort out reasons for housing units erroneously marked as vacant, and the extent to which errors were caused by households maintaining two places of residence or moving during the census period.

    • Current Population Survey–Census match: match of CPS records collected during the week of March 19, 1970, to census returns in order to estimate the gross number of missed housing units at the overall level (not just the mailout areas, as in the housing unit coverage study). (The matched CPS–census records were also used to evaluate the quality of consistency of common data items, under the “measurement of content error” heading below.)
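
Match-based studies like the CPS–census match estimate a national total of missed units by weighting up sample cases that fail to match a census record. A schematic sketch of that logic, with invented weights and match flags rather than any actual CPS figures:

```python
# Schematic sample-weighted estimate of missed housing units from a
# survey-census match (hypothetical data; the Bureau's actual
# estimators also adjust for matching and processing error).

def estimate_missed_units(sample):
    """Sum the survey weights of sampled units not found in the census."""
    return sum(weight for weight, matched in sample if not matched)

# Toy sample: (survey_weight, matched_to_census) for five housing units.
sample = [(1500.0, True), (1500.0, False), (2000.0, True),
          (2000.0, False), (1200.0, True)]
print(estimate_missed_units(sample))  # 3500.0: weights of the two unmatched units
```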

    • Definitional errors in housing unit count: this study was intended to measure gross and net definitional error, defined as instances in which the occupants of multiple housing units were combined and counted as one household or the occupants of a single housing unit were split and counted as multiple households. Samples of about 140,000 questionnaires from mailout-mailback areas and 70,000 from enumerator-collected areas were clerically reviewed to identify households with potential for such errors, and field reinterviews were scheduled with a subsample of those flagged cases.

    • Special procedures for hard-to-enumerate areas: review and assessment of five specific coverage improvement programs implemented for the 1970 census: (1) movers operation, which asked post offices in 20 large cities for change-of-address information for people who moved a month before and after Census Day; (2) precanvass, a final precensus review of census mailing lists for the inner-city areas of the largest metropolitan areas; (3) “missed person” forms distributed to community action groups, asking them to identify persons that they believed may have been missed by the census; (4) postenumeration postal check of address listings in traditional enumeration areas (i.e., not mailout-mailback); and (5) vacancy recheck, a follow-up study of a nationwide sample of 15,000 housing units classified as vacant by enumerators.

    • Analysis of census coverage by local residents: ethnographic study of census experiences and perceived reasons for undercounting in a sample of living quarters that was then being studied (along similar lines) by other Census Bureau demographic survey programs.

    • District of Columbia driver’s license study: records check of a sample of about 1,000 young males (ages 20–29) who had either newly obtained or renewed an existing driver’s license in the District of Columbia between July 1969 and June 1970.

  • Measurement of content error

    • National Edit Sample: a sample of about 15,000 households, distributed across the different types of enumeration areas (mailout-mailback and conventional enumeration). For mailout questionnaire packages in the sample, a special envelope was included so that the questionnaire was mailed directly to the Census Bureau’s main processing center in Jeffersonville, IN; a photocopy of the questionnaire was made before sending it to the appropriate local office, and a copy of the final questionnaire after editing and follow-up steps was obtained for comparison with the mailed original. During the 1970 census, the National Edit Sample returns—having been funneled directly to Jeffersonville—were used as part of an “early warning” system for judging workload in local and district offices; after the census, the sample was analyzed to study the effects of editing operations.

  • Sample control in mail areas: a study of how the final composition of the roughly 1-in-5 long-form sample compared with the original design of the sample (i.e., whether long- and short-form questionnaires were interchanged in multiunit structures).

  • Quality control of field operations: review of quality control records maintained by both enumerator crew leaders and district office personnel to identify factors in cases in which enumerators or other staff had to be released due to poor work.

  • Geographic coding evaluation: study in which enumerators were sent to verify the block, tract, and place codes automatically assigned by computer (using the Bureau’s new electronic address coding guides) to a sample of about 5,000 census listings in mail areas. A second phase of the study sent a second enumerator to verify the geographic coding obtained by the original census interviewer in nonmail, traditional enumeration areas for about 5,000 listings.

  • Quality of census sampling: review of counts in sample (long-form) and nonsample households by such factors as household size, race, sex, age, and household type.

  • Coding quality: comparison of questionnaire items as originally coded (onto microfilm/computer-readable forms) with an independent recoding of the same items.

  • Place-of-work data: follow-up of the accuracy of reported place-of-work information (the 1970 census requested a full address of the workplace, whereas the 1960 census was less specific) for a sample of about 4,000 persons drawn from matched CPS–census records (described above).

  • Content Reinterview Study: similar to reinterview studies in the 1950 and 1960 censuses, but asking detailed population and housing items of all reinterviewed households. A sample of about 11,000 housing units (10,000 occupied and 1,000 flagged as vacant) from the long-form sample universe was flagged for reinterview. Special attention in the reinterview analysis was paid to new questionnaire items on the 1970 census, such as mother tongue and vocational training. The content reinterview sample was also compared with vital statistics data to assess the accuracy of the last child born and number of children ever born items, and matches were made before and after editing, processing, and imputation procedures in order to study their impact.

  • Disability study: specific evaluation study requested by the Social Security Administration, reinterviewing about 15,000 households in which at least one person was reported as disabled and about 25,000 households said to contain only nondisabled persons. The reinterview questionnaire contained additional probes and alternative wordings to try to elicit fuller response to the disability questions.

  • Employer record check: similar to the check of reported occupation and industry data against employer records conducted in 1960 but with a larger sample size: about 6,000 persons rather than 2,000.

  • Employment 5 years ago: report of the 1968 Subject Response Study (see Section A–3.a), in which data on employment, occupation, and industry reported by a sample of Current Population Survey interviewees in July 1963 were compared with answers to “occupation 5 years ago” questions on a 1968 test questionnaire.

  • Record check on value of home: revival of a 1950 census reinterview study, wherein the actual sale prices of about 3,000 single-family homes sold between July and December 1971 were compared with responses to the “value of home” question on the long-form sample questionnaire.

  • Record check on gross rent: similar to the record check on home value, the study of gross rent in 1970 extended an idea from the 1950 census. In five selected metropolitan areas, about 1,200 renter-occupied households were drawn from the 1970 census returns; the local gas and electric utility companies provided data on amounts paid for each of the 12 months before Census Day, which were compared with census-reported figures.

  • Assessment of interviewer’s contribution to census errors: estimation of response variance due to enumerators based on a sample of enumeration assignments distributed across “decentralized mail” areas (which, combined, accounted for about one-half of the U.S. population; in the 1970 census, questionnaires in these areas were mailed to local or district offices, where enumerators edited and checked for completeness the returned questionnaires).

  • Assessment of publicity campaign: a sample survey administered to about 600 radio stations and 700 television stations, asking about the extent to which their programming publicized census topics and the number of times that public service spots about the census were broadcast (including the dollar value of those spots, if they had been treated as paid commercials).

  • Experiment on expanded mailout-mailback enumeration: anticipating that future censuses would expand mailout-mailback coverage beyond the densely populated urban areas targeted in 1970, a sample of 10 district offices in rural areas was drawn and paired so that, within each pair, one office would be counted by traditional procedures (control) and the other attempted by mailout-mailback (test). Returns from these pairs of offices were then compared on coverage completeness, cost of enumeration, and item nonresponse.

SOURCE: Adapted from U.S. Census Bureau (1976:Chap. 14).

A–4
1980 CENSUS

A–4.a
Principal Pretests and Experiments Conducted Prior to the Census
Tests Conducted as Part of Special Census or Other Survey
  • April 1975 (San Bernardino County, CA): first test of use of computer terminals in local (district) offices for transmission of cost and progress reports and payroll information. The terminals were also used to send block-level population and housing unit counts, in part to facilitate a first-ever review of those counts (specifically, precensus estimates) by local authorities. Although some communication problems were found, similar computer configurations were used in later tests in Pima County, Travis County, Camden, and Oakland (described below). However, the Bureau ultimately decided not to place terminals in the local district offices during the 1980 census due to cost and maintenance issues. The local review of population and housing units was also tested in subsequent experiments, and, in the end, local authorities were given a chance to review counts between the first and second phases of follow-up during the 1980 census.

  • October 1975 (Pima County, AZ): further testing of district office computer terminals and local review of preliminary housing unit and population counts, conducted as part of a special census. The Pima County test also focused on the use of “nonhousehold source” lists—lists of person names collected from local authorities. A list of about 2,700 names and addresses of predominantly Hispanic persons was collected from community organizations and from public assistance programs, and this list was cross-checked with the test census address register. About 6 percent (160) of listed addresses were found to have been missed by the test census; follow-up with these missed persons found an additional 231 persons who had been missed in the census count.

Pretests of Census Operations and Questionnaires
  • April 1975 (Salem County, NJ): test of the collection of income data on a 100 percent basis (that is, on the short form rather than the long form of the census). The test grew from dissatisfaction with the long-form sample data on income from the 1970 census, particularly for small areas due to wider implementation of revenue-sharing legislation. The Salem County test tried four different versions of an income question, including one simple “total income” question (with categorical response) as well as one modeled on the detailed ledger-type query usually found on the long-form questionnaire.

  • May 1975 (national sample): national mail-only test of the four income questions tested on a local level in Salem County, NJ; the national test included 19,700 housing units, and some nonresponse follow-up was conducted by Census Bureau survey interviewers. The Salem County and national tests suggested the feasibility of including an income question on the short form—particularly a question combining a series of yes/no questions on income types with a categorical “total income” response—but the Bureau decided in 1977 to exclude the question from the 1980 census short form.

  • September 1975 (four counties in Arkansas, two parishes in Louisiana, and three counties in Mississippi): test of three alternative strategies for address prelisting in rural areas: (P1) inquire only when necessary, knocking on doors to gather address information only if observation was unclear; (P2) inquire at every structure, with only one follow-up attempt if information could not be obtained from observation or from neighbors; and (P3) inquire at every structure, with “unlimited” (several) follow-up attempts, using information from neighbors only as a last resort. However, the P3 strategy was only “simulated” by making additional visits to P2 addresses where no homeowner was at home during the initial visit but information had already been determined by observation or from neighbors. The Bureau concluded that P2 should be used in the 1980 census (and it was), finding that it outperformed P1 but that additional gains from further callbacks in P3 were outweighed by additional costs. In a sample of enumeration districts in these areas, a separate quality control crew listed 25 addresses in each district and matched those to address registers collected during the test; a similar quality control measure would be used in the 1980 count.

  • Fall 1975–Winter 1976 (Columbus, OH): test of the quality of address lists obtainable in Tape Address Register (TAR) areas, or mail delivery areas in urban centers where mailing lists could be obtained from commercially produced computer tapes. Such registers had been used in 145 metropolitan statistical areas in 1970 and were proposed for use in all available areas in 1980. The test compared attempts to update the 1970 census address register for Columbus with four different commercial TARs (as well as post office checks and Census Bureau geocoding). The Census Bureau found the TARs to be usable for 1980 but ultimately decided against trying to directly update 1970 files based on the new sources.

  • April 1976 (Travis County, TX): first major pretest of census procedures, running from the opening of a district office in Austin in January through closing of the office in mid-September (about 2 months behind schedule). The Travis County test was intended as a “mini-census” involving both mail and field components. In the test, Census Bureau officials paid particular attention to monitoring the mail return rate to the local census office as a diagnostic tool. The questionnaires in the test census followed the 1970 census closely, save for revised questions on income (following up on the Salem County and national mail income pretests) and Hispanic origin. The Travis County test also began the innovation of providing Spanish-language questionnaires upon request; a Spanish-language message in the main mailing package directed interested respondents to call a telephone number or check a box on the English language form in order to request the Spanish questionnaire. However, take-up on the Spanish questionnaire was very low—only 50 requests out of 15,000 households with a Hispanic-origin householder. The Travis County test also replicated the “nonhousehold source” name and address list approach from the Pima County special census, comparing census coverage with names and addresses from driver’s license files. The test also directly followed up with persons who had given a change-of-address notice to the post office within one month of Census Day.

  • May 1976 (Gallia and Meigs Counties, OH): test of revised methods for designating enumeration districts (essentially, an individual enumerator’s workload area) to ensure that they comply with natural, recognizable land features rather than invisible political boundaries. The test found little difference in results between using the visually recognizable enumeration districts and the block groups carved out of those districts for tabulation purposes. Hence, block groups were used as standard units in 1980.

  • July 1976 (national sample): National Content Test involving mailed questionnaires to about 28,000 housing units divided into two panels. Each panel was administered a questionnaire containing alternative versions of several questions, including relationship, ethnic origin, education, school attendance, place of birth (asking about mother’s place of residence at birth rather than actual birth location), and disability. About 2,300 households from each panel were chosen for a content reinterview by a Census Bureau survey interviewer in September–October 1976. The disability question, in particular, was found to fare poorly in accuracy as measured by the reinterview, but it was nonetheless retained in the 1980 census due to its policy importance.

  • September 1976 (Camden, NJ): second major “minicensus” test, which focused heavily on new coverage improvement techniques for hard-to-enumerate areas. These techniques included (1) team enumeration, comparing the effectiveness of enumerators working alone, enumerators working in pairs, or blitz-type enumeration by a crew or team (the team methods were found to produce better quality results at the cost of some loss in productivity; in 1980, managers would be authorized to use team methods to resolve hard-to-enumerate areas); (2) Spanish-language forms available upon request, as in Travis County, but with expanded use of walk-in questionnaire assistance centers rather than phone banks; (3) “nonhousehold source” lists from driver’s license rolls, as in Travis County; (4) formation of a complete-count committee of local officials and organizers to promote the census, modeled after a group that had formed in Detroit during the 1970 census; (5) fielding of a short survey to assess awareness of the advertising and public information campaign associated with the test census; and (6) testing of a two-phase local review process to examine preliminary housing unit and population counts.

    The Camden test is perhaps most notable because the city of Camden challenged the pretest results (and the Bureau’s 1975 population estimate) for the city as being erroneously low, occasioning hearings before the U.S. House Subcommittee on Census and Population. The city also filed suit against the Census Bureau, seeking an injunction on use of the sub-100,000 population estimates for the city in federal and state allocation programs. Ultimately, the suit was dismissed in March 1980 by mutual consent of the city and the Census Bureau.

  • September 1976 (three chapters of the Navajo Indian Reservation, Arizona and New Mexico): test of revised procedures for tribal lands, beginning with an enumeration of the three chapters by usual census methods. The results were then compared with the Navajo population register maintained by the Bureau of Indian Affairs (BIA) of the U.S. Department of the Interior; nonmatched cases were submitted for field follow-up. The BIA register was determined to be inadequate for use as a coverage improvement device in the 1980 census. However, the Census Bureau found BIA maps—combined with aerial photography—useful in mapping the areas and assigning enumerator work.

  • January 1977 (four counties in Arkansas, two parishes in Louisiana, and three counties in Mississippi): “rural relist test” in which the same counties covered by the September 1975 address listing test were revisited. The test was intended to provide information to choose between “early” address listing in the 1980 census (listing in spring 1979, followed by two postal checks) and “late” listing (listing in January 1980, with one postal check in March 1980). Late listing was found to have generally better coverage, but the two postal checks in the early listing scheme compensated for the coverage differences.

  • April 1977 (Oakland, CA): third major pretest of major census operations, this one incorporating alternative questionnaire designs for the race and Hispanic-origin questions (in response to an Office of Management and Budget directive that data be provided for four race categories). The relationship question was also revised to ask about each person’s relationship to a reference person (the person completing the form as “Person 1”) rather than to the “head of household.” In the test, the Census Bureau experimented with the use of reminder postcards to urge households to return their form; such cards were mailed to even-numbered enumeration districts. Although the test results implied that the reminder postcards could boost mail response by as much as 5 percent, the Bureau judged that the gains would not outweigh the additional postage cost and declined to use reminder postcards in the 1980 census. Operationally, the test also paid some enumerators using hourly rates rather than “piece rates,” as had been done in 1970.

  • July 1978 (national sample): based on responses to the Hispanic-origin question in the Richmond dress rehearsal (below), the Census Bureau conducted a National Test of Spanish Origin of about 3,200 housing units, by mail, during summer 1978. The test compared different versions of that question, and the question that was judged to have the best performance was slated to be used in the September 1978 lower Manhattan dress rehearsal (below).

Dress Rehearsals
  • April 1978 (Richmond city and Chesterfield and Henrico Counties, VA): on the recommendation of the Civil Service Commission (later the Office of Personnel Management), the Richmond dress rehearsal approached enumerator training from a “performance-oriented” standpoint, emphasizing visual aids and workbooks rather than verbatim recitation. The Richmond rehearsal was also the first test of a dependent household roster check (later used in the 1980 census) in which households returning incomplete questionnaires were recontacted by phone or enumerator visit; the roster of people listed on the original questionnaire was read back to the respondent to determine its accuracy. The questionnaire used in the Richmond rehearsal was also the first census questionnaire to use color printing to enhance readability (blue backgrounds to highlight questions and blue type for cover information and instructions).

  • April 1978 (La Plata and Montezuma Counties, CO; portions of Archuleta County, CO, and San Juan County, NM, were also included to complete coverage of the Ute Mountain and Southern Ute Indian reservations): dress rehearsal featuring the first test of traditional door-to-door enumeration since the 1970 census. Specifically, Advance Census Reports were mailed to households, but respondents were instructed to keep them until an enumerator came to collect them. Because of the Indian reservations included in the dress rehearsal area, a supplementary questionnaire for on-reservation households was added to the test (and later used in the 1980 census). The supplementary questionnaire included questions on tribal affiliation, migration, utilization of government programs, and income. In the test, it was noted that reservation households were displeased if they had to complete both the census long-form questionnaire and the supplementary questions, so the supplement was waived for on-reservation long-form households in 1980.

  • September 1978 (Lower Manhattan, NY): a third dress rehearsal was not part of the Bureau’s original plans, but advisory committees criticized the limited racial and ethnic diversity of the Richmond and Colorado rehearsal areas (particularly with respect to Hispanic and Asian American populations). In this test, the Census Bureau experienced resistance from mail carriers when a postal check of address lists was performed; the carriers objected to having to complete a separate postcard for each unit in large multiunit structures that had been omitted from the census address register (as a result, future postal checks permitted only one “add” postcard to be completed for all the units of a completely missed structure/address). Difficulties in recruiting temporary enumerators prompted the Census Bureau to seek a waiver of the requirement that census workers be U.S. citizens (a waiver that would later hold for the 1980 census).

SOURCE: Adapted from U.S. Census Bureau (1989:Chap. 2).

A–4.b
Research, Experimentation, and Evaluation Program

For the 1980 and subsequent censuses, the descriptions of research and experimentation programs that follow do not include those studies related directly to coverage evaluation through a postenumeration survey. In all, the Census Bureau’s program of evaluations and experiments included about 40 separate projects.

  • Coverage evaluation

    • Housing unit coverage studies: program to estimate missed housing units based on a match of Current Population Survey and census returns as well as a follow-up interview with a sample of households already counted in the census (i.e., a subsample of the “E” or enumeration sample to which the postenumeration survey is compared). The studies also estimated the rate at which whole housing units (those containing at least one household member who was duplicated elsewhere) were themselves duplicated in the census (i.e., through clerical or geographic coding error).

    • CPS–Census retrospective study: reverse record check study in which one rotation panel from the March 1977 CPS (about 20,000 people) was matched to the 1980 census returns, with the intent of determining how often people could not be either directly matched or successfully contacted (traced) to verify their address on Census Day 1980. Census clerks were permitted to examine 1979 Internal Revenue Service return data to try to find a different address if a sample person could not be found at their 1977 address. The study was conducted between 1982 and August 1983 (potentially putting further distance between people and their 1977 or 1980 addresses).

    • CPS–Internal Revenue Service (IRS) administrative records match: as part of initial research into the feasibility of triple-system estimation, about 92,000 records from the February 1978 CPS were matched to an IRS data file extract based on Social Security number (SSN). Records were also sent to the Social Security Administration for matching to summary earnings records, either by SSN (for those CPS records that included SSN) or based on name and other information, and match rates across the three data sources were estimated and compared.

    • IRS–Census direct match: the CPS–IRS records match suggested a surprisingly high erroneous match rate: cases matched based on SSN were found not to match on name. This result prompted a further study in which a sample of census records was matched to the IRS individual master file.
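The weakness these match studies uncovered, namely that records matched on SSN alone may not belong to the same person, can be illustrated with a minimal sketch. The field names and toy records below are hypothetical; the actual studies used far more careful name standardization and clerical review.

```python
# Sketch of an SSN-based match verified against name, in the spirit of the
# CPS-IRS and IRS-Census match studies. Records and field names are
# hypothetical illustrations, not the Bureau's actual file layouts.

def ssn_match_with_name_check(file_a, file_b):
    """Match records on SSN, then count name disagreements among matches."""
    b_by_ssn = {rec["ssn"]: rec for rec in file_b}
    matches = 0
    name_disagreements = 0
    for rec in file_a:
        other = b_by_ssn.get(rec["ssn"])
        if other is None:
            continue  # no SSN match in the other file
        matches += 1
        # A simple surname comparison stands in for fuller name matching.
        if rec["surname"].upper() != other["surname"].upper():
            name_disagreements += 1
    return matches, name_disagreements

cps = [{"ssn": "111-11-1111", "surname": "Smith"},
       {"ssn": "222-22-2222", "surname": "Jones"},
       {"ssn": "333-33-3333", "surname": "Lee"}]
irs = [{"ssn": "111-11-1111", "surname": "Smith"},
       {"ssn": "222-22-2222", "surname": "Brown"}]  # same SSN, different name

matched, suspect = ssn_match_with_name_check(cps, irs)
print(matched, suspect)  # 2 SSN matches, 1 failing the name check
```

The ratio of suspect to matched cases is a crude estimate of the erroneous match rate that motivated the IRS–Census direct match study.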

  • Coverage improvement evaluations (in rough chronological order of the operations they describe; the growth in the number of these operations between 1970 and 1980 is discussed in Chapter 2, and the individual operations are further described in National Research Council, 1985):

    • Advance post office check: evaluation of a complete review by the U.S. Postal Service of a compiled set of commercial address lists (about 38 million addresses). The Postal Service suggested adding 5 million addresses and deleting or modifying 2.9 million others. A match of the Postal Service–suggested additions to the Census Bureau’s internal address register yielded a net of 2.2 million additions. (A small sample of about 4,100 addresses from the commercial lists was deliberately withheld, for later matching to the Postal Service’s returned list; about two-thirds of the withheld addresses had been added by the postal check.) The operation cost about $7 million, about $5 million of which was paid to the Postal Service.

    • Casing and time-of-delivery post office checks: evaluation of problems (e.g., missing addresses or known-undeliverable addresses) detected by local postal staff as they “cased” (sorted in delivery order) the census questionnaire mailing packages. Further reports were obtained from the local mail carriers as they attempted delivery. Together, these two checks resulted in the identification and enumeration of another 2 million housing units; the two operations cost about $9.3 million, about $6 million of which was paid to the Postal Service.

    • Precanvass: evaluation of an operation in which enumerators canvassed areas carrying excerpts from the census address register (updated from the advance post office check) to verify or correct entries and add missed housing units. The precanvass was conducted on a 100 percent basis in the urban areas where the commercial address list–based register was to be used; after the census, precanvass registers were compared with final census registers and census data for a sample of enumeration districts in order to study the characteristics of added housing units. The Census Bureau estimated that the precanvass added 2.36 million addresses to the census, at a cost of about $12 million.

    • Casual count: evaluation of coverage improvement operation through which teams of two enumerators were sent to places that people with no permanent place of residence were expected to frequent: transit stations, unemployment offices, bars, and so forth. The enumerators attempted interviews to collect information and determine whether an interviewee might have been counted elsewhere. Ultimately, the procedure was found to have added only about 13,000 people (at a cost of about $250,000).

    • Census questionnaire coverage items and dependent roster checks: field operation to resolve disparities in reported counts, i.e., mismatches between the reported number of persons in the household and the number of names listed in response to Question 1 on the questionnaire or between the reported and expected (from the address register) number of living quarters at the address. In the follow-up interviews, respondents were asked to review the names on the originally reported roster and to make additions or deletions. The operation was conducted in 260 enumeration districts (a systematic 1-in-1,000 sample of districts). The Census Bureau estimated that the living quarters check—comparing the reported number of living quarters at the address to the address register and following up on discrepancies—ultimately added about 93,000 housing units to the census, concentrated in 30 of the 260 sample enumeration districts.

    • Whole household “usual home elsewhere” identification: the 1980 census questionnaire included a question after the main household roster asking whether everyone listed in the roster was only staying at the Census Day location temporarily and had a usual home elsewhere. Home addresses could be entered on the back cover of the questionnaire. The information was transcribed onto a new form and routed through the appropriate district office for the “usual home” location. Subsequently, the Bureau mounted a clerical check operation to verify whether these “usual home elsewhere” cases had in fact been counted at the usual place of residence. About 547,000 people in 301,000 housing units fell into the whole household “usual home elsewhere” category; the clerical check and further evaluation suggested that a total of about 1 million people were reallocated through this operation and about 200,000 were counted in at least two places (because their listings at temporary addresses had not been deleted).

    • Nonhousehold sources program: similar to operations in previous tests, the Census Bureau compiled independent lists of persons (particularly from minority populations) believed likely to be missed in the census. In particular, the 1980 version of the program used driver’s license records, U.S. Immigration and Naturalization Service records, and 1979 New York City public assistance files as sources of possible missed persons. The actual yield of persons added to the census—about 1.9 percent of the compiled lists—fell well short of the 10 percent that pretests had suggested; however, about 58,000 persons were eventually determined to be cases that should have been added to the census but were not due to processing time constraints. The public assistance and Immigration and Naturalization Service files were judged to yield about twice as many adds per follow-up case as the motor vehicle files.

    • Follow-up of vacant and deleted housing units: evaluation of procedure for visiting every housing unit flagged as “vacant” or “deleted (as nonexistent)” by census enumerators. About 5.5 million vacant and 2.3 million deleted units were covered by this operation; about 10 percent and 7.5 percent of the vacant and deleted units, respectively, were found to be classified incorrectly and converted to occupied units. At $36.3 million, the vacant and delete check was the most costly of the coverage improvement programs in the 1980 census.

    • Prelist recanvass: evaluation of the completeness of the address list in prelist areas (those for which the address list was principally generated and verified prior to the census by enumerator visit). In 137 district offices, areas were selected to be recanvassed and listed; these recanvass registers were then compared with the census master address register to determine whether any housing units could be added or deleted. In the sampled areas, about 105,000 housing units containing about 217,000 persons were added to the census due to the recanvass; the operation cost $10.29 million.

    • Assistance centers: assessment of the level of usage of walk-in questionnaire assistance centers in 87 centralized district offices and telephone assistance for all 373 district offices. Evaluation was incomplete because necessary records were not retained; however, the evaluation suggested that about 790,000 contacts had been made for questionnaire assistance, most commonly on residence questions (who to list on the roster) and income reporting.

    • Spanish-language questionnaire: evaluation of the usage of the Spanish-language questionnaire that could be requested in mailout-mailback areas (either by a check box with Spanish instructions on the main English-language questionnaire or through contact with a census assistance center or nonresponse follow-up interviewer) or from the enumerator in nonmail areas. Necessary data for this evaluation were incomplete: enumerators were not asked to keep a record of the number of Spanish-language questionnaires they distributed on their rounds.

    • “Were You Counted?” program: evaluation of a program through which people who believed themselves to have been missed by the census could fill out a blank questionnaire and return it to the Census Bureau. A blank “Were You Counted?” questionnaire was given to urban newspapers (including some with questions translated into various foreign languages) for publication as a public service; the form could be clipped and mailed for processing. About 62,000 forms were received nationwide, reporting data for about 140,000 people; about one-quarter of these were found to have been already enumerated and about half were added to the census (the remaining quarter were unusable, including those that provided an unlocatable address).

    • Postenumeration post office check: evaluation of a postcensus check in which local Postal Service staff were asked to review the addresses in areas where conventional door-to-door, list-enumerate methodology had been used. This review suggested another 148,000 housing units that might have been missed by the census; ultimately, the Census Bureau estimated that this operation added about 50,000 housing units and 130,000 people to the final census counts.

    • Local review: evaluation of a procedure in which preliminary population and housing unit counts were generated down to the enumeration district level, after completion of nonresponse follow-up, and given to local officials to review. The officials were asked to identify and provide evidence of any major discrepancies between the counts and their local records and estimates. About one-third of the 39,000 contacted governmental units opted to participate in local review, and about one-half of those flagged at least one discrepancy. About 20,000 discrepant cases were resolved without recanvassing in the field; recanvass work corrected geographic coding for about 28,000 housing units and added about 53,000 housing units to the census.

  • Content evaluation

    • Content reinterview study: reinterview of about 14,000 housing units that had been included in the long-form sample, using a questionnaire focused on questions that were new or substantially revised for the 1980 census (e.g., school of attendance, non-English language spoken and ability to speak English, ancestry). Additional probe questions or alternative question language was added for several of the items. Interviews were conducted by members of the Census Bureau’s CPS staff (permanent interviewers) between November 1980 and January 1981; proxy reporting was allowed for young people (age 15 or less), with proxy interviews for persons age 16 or over allowed only as a last resort. About 88 percent of the sample was successfully reinterviewed. Questions were evaluated for their response variability and for unusual patterns that might suggest reporting problems (e.g., finding that Idaho was commonly reported as the place of birth in the census when the reinterview indicated either Illinois or Indiana, suggesting a coding or handwriting interpretation problem).

    • Analysis of matched census–content reinterview study records: two separate analyses during the 1980s of the data (including results of alternative question wordings) from the content reinterview study. A 1983 study focused on education-related questions (e.g., highest grade attained, degrees attained, present enrollment); a 1986 study examined responses to the place-of-birth question in more detail.

    • Utility cost follow-up surveys: comparison of reported data on utility costs in the 1980 census (asking about the previous 12 months) with actual cost data obtained from 11 utility companies in 8 cities in December 1979. The participating companies sent a notice of each customer’s average monthly utility costs to half of their residential customers along with their March 1980 bills (thus providing them with an accurate answer to the census question); the other half of the customers served as a control group. The companies then submitted lists of customer names, addresses, average monthly utility cost, and sample/control group status to the Census Bureau for matching to census returns. The data from the experiment were analyzed in 1982, leading to the finding of general overreporting of utility costs (more so for gas than electricity); the advance notification (sample group) reduced the size of the overreporting but did not eliminate it.

  • Processing and quality control evaluations:

    • Evaluation of qualification tests for coders: prospective employees for coding operations (matching respondents’ reported information for industry and occupation or for place-of-work/migration) were put through multistage qualification tests, using decks of census questionnaires filled with artificial data. The evaluation studied performance at the different stages (i.e., improvement in accuracy on a second, more difficult deck of test cases conditional on results of the first deck).

    • Coding accuracy: quality control checks were performed for coding of items onto machine-readable forms, differentiating between the particularly detailed coding for industry and occupation and place-of-work/migration and “general” coding of all other items (e.g., ancestry, income). For income-item coding, a special analysis considered the frequency of “factor of 10” errors: inadvertent shifting of a decimal point to the left or right.
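A “factor of 10” keying error is straightforward to screen for: a keyed value that is exactly a power of ten away from a verified value points to a shifted decimal point. A minimal sketch (the values are hypothetical illustrations, not the 1980 quality control procedure itself):

```python
# Flag keyed income values that differ from a verified value by an
# exact power of ten, the signature of a shifted decimal point.
# Hypothetical values for illustration only.

def decimal_shift(keyed, verified):
    """Return the power-of-ten shift if one explains the discrepancy, else None."""
    if keyed <= 0 or verified <= 0 or keyed == verified:
        return None
    ratio = keyed / verified
    for shift in (-2, -1, 1, 2):  # check a couple of decimal places either way
        if abs(ratio - 10.0 ** shift) < 1e-9:
            return shift
    return None

print(decimal_shift(125000, 12500))  # decimal shifted one place right
print(decimal_shift(1250, 12500))    # decimal shifted one place left
print(decimal_shift(12600, 12500))   # ordinary discrepancy, not a shift
```

Tallying how often such a shift explains a coder-versus-verifier discrepancy gives the frequency estimate the special analysis was after.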

    • Geographic Base File/Dual Independent Map Encoding (GBF/DIME) file closeout evaluation: to review the accuracy of the GBF/DIME files used to assign geographic identifiers (e.g., ZIP code, census tract, and city/place) to addresses, a stratified sample of 600–800 addresses from each of the 280 GBF/DIME files was drawn. Cards were generated for each sample address and sent to the appropriate regional census offices; the geographic codes in census files were compared with the GBF/DIME results by a geographic specialist in each regional office.

  • Other studies

    • Census Logistical Early Warning Sample (CLEWS): similar to the National Edit Sample of the 1970 census research program (see Section A–3.b), the CLEWS was a national sample of 6,000 households (evenly divided between short-form and long-form-sample households) for which the mail response envelope was specially marked (“CLEWS”) and addressed for direct return to the Census Bureau’s processing center in Jeffersonville, IN. A copy of the questionnaire was made and retained in Jeffersonville before forwarding to the appropriate district office; the CLEWS was used to provide quick estimates of daily mail-return rates and detect possible enumeration problems. The CLEWS questionnaires were also used as the control group for the alternative questionnaire experiment described below.

    • Imputation, allocation, and substitution: Census Bureau staff developed evaluation studies for the three basic methods for assigning data for unreported items (for housing, persons, or both). In support of the evaluation, census field staff undertook a verification check of about 11,000 units in 12 areas with high rates of “unclassified” housing units (those lacking information on household size or clear indication of vacancy status). Part of the evaluation also considered the extent to which the data used to fill gaps for particular housing units wound up being drawn from an adjacent unit—that is, the housing unit immediately preceding the “gap” unit on the Census Bureau’s record tapes.
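The “adjacent unit” pattern noted in the evaluation is a natural consequence of sequential hot-deck allocation, in which a missing item is filled from the most recently processed unit with a reported value. A minimal sketch (deliberately simplified; the Bureau’s actual allocation conditioned on several household characteristics, not file order alone):

```python
# Sequential hot-deck allocation: fill a missing item from the most
# recently seen reported value. Because the donor is whichever unit
# last preceded the gap on the file, missing values are often filled
# from the immediately adjacent housing unit. Simplified illustration.

def hot_deck(values, cold_deck_start):
    """Allocate None entries from the last reported value (simplified)."""
    last_reported = cold_deck_start  # fallback before any donor is seen
    filled = []
    for v in values:
        if v is None:
            filled.append(last_reported)  # donate from the preceding unit
        else:
            last_reported = v
            filled.append(v)
    return filled

# Hypothetical housing tenure codes for successive units; None = unreported.
print(hot_deck(["owner", None, "renter", None, None], "owner"))
# ['owner', 'owner', 'renter', 'renter', 'renter']
```

Tracing which donor served each gap, as in the sketch, is exactly the kind of check the evaluation performed on the Census Bureau’s record tapes.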

    • Public information evaluation:

      • Advertising media evaluations: audits commissioned by the Advertising Council to estimate the reach and frequency of the advertising materials (distributed to media outlets for dissemination on a nonpaid, public-service basis) for the 1980 census. In particular, the auditors were asked to estimate the value of the broadcast or publication of the advertising materials if they had been run on a paid basis. Separate studies were arranged to study the dissemination of the materials among predominantly black and Hispanic audiences.

      • Knowledge, attitudes, and practices survey: special surveys conducted before the census (late January–early February 1980) and after the census (late March) asking six questions about the census, the confidentiality of census data, and the uses of census data for apportionment and redistricting. The study was intended to assess the degree to which the public-service advertising materials increased awareness of basic census concepts. The precensus survey obtained interviews from 2,431 housing units (out of 3,772 eligible for the sample), and the postcensus survey included returns from 2,446 interviews (out of 3,115 eligible).

    • Applied Behavior Analysis Study: experiment in which permanent Census Bureau survey enumerators were sent to visit a sample of about 11,000 housing units (clustered in 20 district offices) in the short window between Census Day and the start of nonresponse follow-up activities. Respondents were asked to describe how they had participated or were participating in the census (e.g., acknowledging receipt of the form in the mail, starting to fill in the form, mailing in the form). The study was intended to identify factors differentiating households that had already mailed in the form from those that merely acknowledged receiving it. Later, the interview records were matched to final census records to determine whether people who reported not receiving the form had in fact been counted and whether households reporting that they had mailed the form were actually counted in the census on an enumerator return (nonresponse follow-up).

  • Experiments

    • Alternative questionnaire experiment: test of two sets (both short and long form) of questionnaires. One set was designed to be machine readable and presented the short-form questions in horizontal rows rather than vertical columns; long-form content was slightly rearranged. The second set was not machine readable by the Bureau’s then-current processing system but emphasized visual appeal and respondent comprehension. Although the experimental groups and control group were intended to include about 18,000 addresses, only 14,400 returns were usable due to packaging and delivery problems.

    • Telephone nonresponse follow-up experiment: experiment to test the feasibility of nonresponse follow-up by telephone in mail census areas and to compare the cost of telephone and personal visit methods. The study was limited to single-unit structures because the “crisscross” telephone directories used to obtain telephone numbers could not provide apartment or subunit designations; moreover, the study did not include the Northeast or South census regions because crisscross directories were not available there. From the workloads of selected district offices, a total of 1,000 nonresponse cases were processed in each of four treatment groups: short-form telephone, short-form personal visit, long-form telephone, and long-form personal visit.

    • Update List-Leave experiment: test of an alternative type of enumeration, conducted simultaneously with the main census effort between March 11 and March 26, 1980. Enumerators were tasked to canvass an enumeration district, simultaneously comparing and updating address register entries and leaving a short- or long-form questionnaire for respondents to complete and mail back. Any group quarters encountered during these canvasses were reported to the enumeration crew leader and handled separately. Five district offices (Dayton, OH; northeast central Chicago, IL; Yakima, WA; Greenville, NC; and Abilene, TX) were chosen for the experiment and paired with control offices in nearby areas.

  • Studies of employee selection

    • Selection aids validation study: program of testing, both as part of the Oakland and lower Manhattan census tests and through in-house study, of selection procedures for temporary census employees. In particular, the tests administered to prospective temporary census employees (nonsupervisory and supervisory) were analyzed for fairness to racial, ethnic, and sex groups.

    • Adverse-impact determination study: review of the disposition of about 62,000 applicants from a sample of district offices in order to determine whether employment rates by demographic subgroup were consistent with civil rights law (i.e., that no subgroup faced an “adverse impact” as defined in the Civil Rights Act of 1964, as amended).

    • Predictive validity project: further assessment of the test administered to prospective 1980 census enumerators, this time to study the test’s predictive power for enumerator longevity (number of weeks on the job and completion of task) and productivity (e.g., time spent per acceptably completed census form).

  • Evaluation of training methods

    • Experimental student internship program: feasibility study of using college students as census enumerators in return for academic credit (as determined by the school) and pay during their 6-week appointment. A total of 46 campuses participated in the experiment; in all, about 1,500 students, faculty members, and Census Bureau staff were involved in the test; about 30 participants traveled to Washington, DC, for a postcensus evaluation workshop in November 1980. Although the interns tended to complete their assignments at a higher rate than the general enumerator staff, the Census Bureau concluded that they were not as productive as the enumerators and were not available for the expected 30-hours-per-week detail.

    • Alternative training experiment: comparison of the default 1980 census training routine (verbatim reading of a training guide by a designated trainer) with alternative training materials (i.e., checklists, flow charts) modeled after those used in industry and the armed forces. Three matched pairs of district offices were chosen for the experiment; in all, about 1,400 enumerators received training in the control group (verbatim method) and 1,200 in the experimental group (alternative method). The alternative techniques (particularly group learning activities such as role playing) were ultimately found to result in better job performance.

    • Job-enrichment training experiment: feasibility study conducted in one of three district offices in Dallas, TX, in which about 70 out of 150 newly hired temporary enumerators accepted an offer to represent the Census Bureau at local community meetings. Those who did so (and, hence, publicly declared their Census Bureau affiliation) were more likely to stay on the job through completion of nonresponse follow-up than those who did not, controlling for other factors.

SOURCE: Adapted from U.S. Census Bureau (1989:Chap. 9).

A–5
1990 CENSUS

A–5.a
Principal Pretests and Experiments Conducted Prior to the Census
Pretests of Census Operations and Questionnaires
  • July–August 1983 (Essex County, MA): prior to the widespread availability of Global Positioning System (GPS) technology, the Census Bureau conducted tests using the U.S. Coast Guard’s Long Range Navigational System (LORAN-C) to record the geographic locations of rural residences for the 1990 census. Like GPS, LORAN-C used low-frequency radio signals to derive geographic locations; however, it had the key limitation that its coverage was largely limited to coastal areas of the continental United States. In the early 1980s, the question was whether it was feasible to use a LORAN-C receiver unit in the field—particularly when, at the time, the estimated cost of a census-ready LORAN-C receiver was as high as $1,000 per unit. Early research by the Census Bureau in 1983 suggested that a custom-built solution would be necessary because existing off-the-shelf LORAN-C receivers did not meet the Bureau’s specifications. Tests in Massachusetts in summer 1983 examined the ability to obtain coordinates from a car-based receiver. Based on poor results, the Census Bureau began to examine the then-developing GPS network, but the full set of GPS satellites was not expected to be in place until 1987–1989. In May 1984, the Bureau decided to suspend further direct research on navigation systems for the 1990 census.

  • 1984 (Bridgeport and Hartford, CT; Hardin County, TX; and Gordon and Murray Counties, GA): multiple-site Address List Compilation Test in both urban (Connecticut) and rural (Texas and Georgia) areas to assess the quality of various address list development technologies. In the Connecticut portion of the test, three basic address list sources were used and compared—commercial vendor mailing lists, U.S. Postal Service lists, and the 1980 census address register updated through new construction permits and other sources. In Hartford, all three lists were obtained and subjected to a “precanvass” updating by Census Bureau field staff; in Bridgeport, the Postal Service lists were not used, and the commercial and 1980-updated lists were put through a mail carrier check and the census field precanvass. In the rural areas, the basic strategies compared were (1) a complete prelist canvass by Census Bureau field staff updated and corrected through a postal carrier check and (2) a Postal Service–generated list updated by census field staff. Based on the test, the Census Bureau decided to use commercial vendor lists for urban areas (using a postal carrier check and a census precanvass operation) and a complete prelisting canvass in rural areas (with a postal check).

  • March 1985 (Jersey City, NJ, and Tampa, FL): first major operational test, intended to focus primarily on increased data collection automation developed since the 1980 census and on a two-stage collection strategy in hard-to-enumerate areas. In the Tampa test, the primary focus was on the use of optical mark recognition in data processing (with the computer-scanned results compared against traditional keying of questionnaires in Jersey City); the test also covered other attempts at adding automation to census processes and the effectiveness of a reminder postcard. The Jersey City test emphasized a two-stage collection method that separated the collection and nonresponse follow-up operations for the short-form and long-form data. In the first stage, short forms were mailed to all addresses (with an enumerator visit to nonresponding households), and long-form questionnaires were sent to a 1-in-5 sample in the second stage (with enumerator follow-up). (In all but a small sample, the short-form questions were asked again in the long-form interviews; in the sample, the only short-form item collected was the respondent’s name.) The goal was to see whether this operational switch would benefit traditionally hard-to-count areas. However, in June 1985, the Census Bureau opted to cancel the nonresponse follow-up for the long-form, second-stage collection due to extremely low mail response rates in the two-stage test areas (and, accordingly, a greater nonresponse follow-up workload involving more costly long-form interviews).

  • June 1985 (Chicago, IL): informal, special test of race and Hispanic-origin questions as a preview for the National Content Test in 1986 (see below). The test considered two short-form designs (one of them the 1980 census model as a control). The principal variation in the questions was the addition of a general “Asian or Pacific Islander” category and a “Yes, Spanish/Hispanic” choice—permitting write-in of specific ethnic origins, such as Mexican or Honduran—to the Hispanic-origin question. Enumerator follow-up was conducted on a limited basis, using experienced Census Bureau survey interviewers; the intent of this follow-up was less to produce estimates than to gather personal observations from the interviewers about respondents’ processing of the questions.

  • April 1986 (national sample): National Content Test of 46,000 housing units testing new question wording and formatting, particularly for the race and Hispanic-origin questions. The alternative questionnaires in this test also included a multiple residence coverage probe question (“Does this person regularly live at another residence for 30 or more days during the year?”) and short-form housing questions including year built, acreage and use, and value. The test also varied the mailing package itself—one envelope designed to be attractive to the eye and the other meant to appear more official. Mailout of test questionnaires was delayed by two weeks because the initial shape and thickness of the mailing packages caused automated assembly hardware to malfunction (hence requiring questionnaires to be inserted by hand into envelopes). Field nonresponse follow-up was conducted on a 25 percent sample basis; a further subsample of these cases was contacted yet again to evaluate consistency of responses. Based on the test, the Bureau decided to move questions on marital history and age at first marriage from the short form to the long form, also adding military service, disability, and some housing items to the long form.

  • March 1986 (central Los Angeles County, CA; eight counties in east central Mississippi, including the Choctaw Indian Reservation): second major operational pretest of census procedures. In Los Angeles, the test attempted to provide information on the use of a local processing office separate from the local census (data collection) office and the effectiveness of the Census Bureau’s planned coverage measurement and statistical adjustment procedures. In Mississippi, the major foci of the test were address list development and questionnaire delivery techniques for rural areas and specific enumeration procedures for American Indian reservations. In particular, the Mississippi test included the first use of update-list-leave enumeration in which enumerators visited areas and simultaneously updated address listings (and map locations) and delivered questionnaires. Favorable results led to the method being repeated in the 1988 dress rehearsal (and the 1990 census). The Mississippi test also included the Census Bureau’s first use of laptop computers for address listing; despite good results, the Census Bureau decided against using portable computers in the 1990 census due to the cost of the equipment. In late March, the Census Bureau canceled census test activities in the Compton office, one of the two district data collection offices in the Los Angeles test area, due to abnormally low mail response.

  • April 1987 (north central North Dakota, including the Fort Totten and Turtle Mountain Indian Reservations): third major test census, in an area of about 75,000 population. In addition to continuing to refine enumeration procedures for Indian reservations, the test also sought to determine whether mailout-mailback methods could be used in “prelist pockets” of mail delivery areas embedded in areas that the Bureau would typically count through door-to-door enumerator visits. The test also sought to make the enumerator schedule/questionnaire attractive and easy to follow (whereas previous questionnaire design had been focused on the main mailout questionnaire). The North Dakota test also continued to vary and test different strategies for enumerator pay, including bonus pay for enumerators and crew leaders.

  • June 1987 (six metropolitan statistical areas—Los Angeles–Long Beach, San Francisco–Oakland, and San Diego, CA; Houston, TX; New York, NY; Miami, FL): further testing of race and Hispanic-origin questions, conducted by mailout to about 27,000 housing units in six metropolitan areas known to contain significant concentrations of Asian or Pacific Islander, Cuban, Mexican, or Puerto Rican populations. This mailout test was dubbed the Special Urban Survey.

  • September–October 1987 (Honolulu, HI; Anchorage and Bethel, AK; El Paso and San Antonio, TX; Charleston, WV): following the Special Urban Survey, the Census Bureau convened focus group interviews
in six cities across the country to assess comprehension of the race and Hispanic-origin questions among selected demographic groups. In Honolulu, separate focus groups of Asians and Pacific Islanders were held; the Alaska focus groups included Eskimos, Aleuts, and Alaska Natives; the Texas focus groups concentrated on Hispanic populations; and the West Virginia site included a mix of blacks and whites. Each focus group included 8–10 participants.

  • October 1989 (targeted sample): special survey conducted as a result of concern among the Asian and Pacific Islander communities, and interest from Congress, in improving the quality of data for those groups. Based on this pressure, the Census Bureau decided to add specific examples of Asian and Pacific Islander categories to the 1990 census form and to permit write-in entries for specific categories; the special survey was a test of this last-minute change. The survey was administered to a sample of about 40,000 housing units mainly targeted to areas with high levels of American Indian and Asian and Pacific Islander response in 1980 (Philadelphia, Chicago, Detroit, New York, and San Diego) as well as a sample from rural areas in West Virginia and Mississippi.

Dress Rehearsal
  • March 1988 (St. Louis, MO; 14 counties in east central Missouri; 8 counties in eastern Washington, including the Colville and Spokane Indian Reservations): dress rehearsals in all three sites provided the first opportunity to use the Census Bureau’s new TIGER database to produce maps and for enumerators to provide TIGER updates. The rehearsal also featured revised training procedures for operations requiring door-to-door visits, emphasizing role-playing simulations in addition to verbatim recitation. The S-Night operation tested in the dress rehearsal was a first attempt at a multiple-step procedure to include segments of the homeless population in the census count.

    The rehearsal was also the first full-scale systems test for the computers and systems developed for use in district offices, processing centers, and headquarters; delays and bid protests in the solicitation for development of these systems meant that the contract had not been awarded until May 1987. In testing during the rehearsal, census staff found problems with the generation of automated management information reports as well as sluggish response times in the local census office computers when multiple operations were performed simultaneously. Concerns that the systems might not have the memory needed for full-scale use in the 1990 census led to a crash software development program to resolve problems. The automation of census processing did enable the Bureau to test a “search/match” coverage
improvement operation, trying to match persons reported on some census forms (group quarters residents indicating a “usual home elsewhere,” military census reports filled by armed forces personnel, and blank “Were You Counted?” forms available from public places) to a usual home of residence.

In the east central Missouri site, nine counties that were originally intended to be enumerated through mailout-mailback yielded large numbers of undeliverable addresses in an advance postal check of the address register; these were converted to update-list-leave areas where enumerators visited households (updating address entries as necessary) and left questionnaires for mailback. The St. Louis portion of the rehearsal introduced a variant of update-list-leave, called urban update-leave, in hard-to-enumerate urban areas, testing the procedure’s use in public housing areas of the city.

SOURCE: Adapted from U.S. Census Bureau (1993:Chap. 2).

A–5.b
Research, Experimentation, and Evaluation Program

Formally known as the 1990 Census Research, Evaluation, and Experimental (REX) program, the Census Bureau’s slate of evaluations and experiments centered on 17 studies (including work specific to the Post-Enumeration Survey and possible adjustment of census figures for undercoverage). Modifications to and extensions of the original 17 studies were proposed and approved by a cross-division steering committee as late as August 1990.

  • Data content and quality

    • Alternative Questionnaires Experiment: test of five different questionnaires (each administered to about 7,000 households) varying question wording and sequence; the alternative questionnaires also varied as to whether they included a “motivational insert” to explain the purpose of the census and try to boost mail return rates. The experiment included a control group of 6,000 households that received the standard census questionnaire.

    • Master Trace Study: a study planned as one of the original 17 REX programs but one that was ultimately abandoned “due to operational difficulties and budget constraints” (U.S. Census Bureau, 1996:11-15). As recommended by the National Research Council (1988), the Census Bureau planned to trace a national sample of 31,000 questionnaires through all stages of operations. In addition to computer files, physical copies of forms and documents were to be made and retained before and after operations that were not computer automated. By design, all households
      chosen for the content reinterview study (described below) were to be included in the Master Trace Study sample.

  • Content reinterview study: reinterview of about 13,000 households in September–December 1990, to assess response variability of selected census questions. The reinterviews were conducted principally by telephone rather than by field enumerator visit, with sample households alerted to the impending call by a letter.

  • Census–Residential Finance Survey match: evaluation of consistency of responses to relevant census items (i.e., income) among households visited by the Census Bureau’s separate Residential Finance Survey.

  • Macro-level consistency check: a planned study that was eventually abandoned due to budgetary and time constraints, the macro-level consistency check was intended to compare 1990 census data (disaggregated by demographic characteristics) with non–Census Bureau sources. These external sources may have included the Current Population Survey as well as administrative data from states and from the federal Medicare program.

  • Coding evaluation: quality assurance operation making use of more experienced coding personnel to ensure that selected write-in responses were keyed and categorized correctly.

  • Integrated evaluation of error: evaluation study intended to develop a “total error model” for 1990 census data, breaking down census error by demographic characteristics and documenting the contributions of various census operations to overall error.

  • Coverage (not including studies directly related to the 1990 Post-Enumeration Survey)

    • Ethnographic evaluation of behavioral causes of undercount: as recommended by the National Research Council (1978, 1985), the Census Bureau commissioned an extensive suite of observational studies by outside researchers, chronicling responses and attitudes toward the census by numerous traditionally hard-to-enumerate populations. Joint statistical agreement contracts were executed with researchers to study 29 selected areas in the continental United States and Puerto Rico. The ethnographers were also asked to conduct their own “alternative enumerations” of their sample areas (using their own methodology) for comparison with the official census counts. Areas (and demographic populations) covered by the ethnographic studies included inner-city areas with high crime rates or major migration shifts, as well as studies of populations on American Indian reservations and migrant agricultural workers in farm communities.

  • Assessments of coverage improvement measures: analysis of evaluation data for each of more than 20 address list building or coverage improvement operations. Studies that required additional data collection or that focused on new or significantly revised operations include:

    • Address list development: evaluation measures for address list development followed the 1980 census model closely. A systematic sample of addresses was withheld from the address list submitted to the U.S. Postal Service in the initial Advance Post Office Check operation; Census Bureau field staff were also dispatched to visit a sample of addresses flagged by the local post offices or individual mail carriers in time-of-delivery postal checks. The 1990 program allowed local officials to review block-level housing unit counts both prior to and following the main 1990 count. Precensus local review was opened only to jurisdictions in mailout-mailback areas (with about 16 percent of eligible jurisdictions opting to participate and 84 percent of those challenging their precensus block counts). About 25 percent of all functioning governmental units participated in the postcensal local review; about 67 percent of those issued challenges. The precensus local review was judged to be particularly effective, with recanvass of challenged blocks ultimately adding 367,313 housing units to the census roster.

    • Urban update-leave: following the procedure’s use in the St. Louis portion of the 1988 dress rehearsal (see Section A–5.a), urban update-leave was conducted in 346 blocks in Chicago, Detroit, Los Angeles, Baltimore, Cleveland, and Philadelphia. In the operation, enumerators were asked to simultaneously verify (or correct) the address register and leave census questionnaires for later mail return. The operation was intended to focus on blocks consisting principally of large public housing developments; however, evaluation suggested that a higher-than-expected 20.7 percent of the units covered in urban update-leave areas were single-unit structures.

    • Urban update-enumerate: similar to urban update-leave, but focusing on entire census blocks preidentified as containing mainly boarded-up units; the intent was to avoid including those units and structures in the larger vacant/delete check. Conducted in only 96 blocks in New York City and Detroit, the operation proved difficult because regional census offices had difficulty identifying blocks meeting the “boarded up” criterion (e.g., identifying blocks that were expected to be fully vacant due to large construction projects but that turned out to be occupied due to construction delays).

  • Shelter and Street Night (S-Night): to evaluate the intensive S-Night operation intended to cover parts of the homeless population as well as users of emergency and temporary assistance shelters, external researchers placed teams of 60 observers in areas expected to be reached by street enumerators. S-Night was conducted on March 20–21, 1990; the evaluation teams were placed in samples of sites in Chicago, Los Angeles, New Orleans, New York City (with 120 observers), and Phoenix. A separate piece of the evaluation studied the quality of the list of shelters that was developed to conduct S-Night operations.

  • Vacant/Delete/Movers Check: the vacant/delete check procedure used in the 1980 census (see Section A–4.b) was expanded to try to identify (and correctly assign geographic locations to) people who moved residences on or shortly after Census Day.

  • Primary selection algorithm review: the Census Bureau’s primary selection algorithm is an automated routine for choosing the “best” questionnaire in instances in which data are obtained on multiple forms for units with the same identification number. The review operation scrutinized the questionnaires that were not identified as the “best” form in order to find any people who might have been missed in the final count. The clerical review of 401,174 multiple-questionnaire cases ultimately added 350,448 people to the census count.
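    As a rough illustration of what such a routine does, a minimal sketch follows; the Bureau’s actual selection criteria are not described in this appendix, so the ranking used here (more person records first, then more answered items) is purely hypothetical.

```python
# Hypothetical sketch of a primary-selection routine: among questionnaires
# returned under the same census ID, pick the one with the most person
# records, breaking ties by the number of answered items. (The actual 1990
# algorithm's criteria are not documented in this appendix.)

def select_primary(forms):
    """Choose the 'best' questionnaire among returns sharing one census ID."""
    return max(forms, key=lambda f: (len(f["persons"]), f["items_answered"]))

# Two returns for the same housing unit: form B reports more people.
returns = [
    {"form_id": "A", "persons": [{"name": "X"}], "items_answered": 4},
    {"form_id": "B", "persons": [{"name": "X"}, {"name": "Y"}], "items_answered": 3},
]
best = select_primary(returns)  # form B is selected
```

    The non-selected return (form A here) would then be the input to the clerical review described above, searched for any people missing from the selected form.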

  • Search/Match: expanding the whole-household “usual home elsewhere” search program from the 1980 census (see Section A–4.b), the 1990 incarnation tried to reconcile “usual home elsewhere” address information reported by residents of group quarters (e.g., college dormitories) and the individual reports filed by military and shipboard service members. The operation also included searches from the parolee and probationer program (described below) and the “Were You Counted?” program.

  • Parolee/Probationer Program: the 1990 census added a special program for distributing “information record” forms to parole and probation officers, which in turn were to be distributed to each of their assignees. Each parolee or probationer was asked for their Census Day address as well as some basic demographic information. Due to lower-than-expected response, the program was augmented in target counties expected to have large parolee or probationer populations; Census Bureau field staff directly contacted corrections departments to obtain administrative data, as available, on names and Census Day addresses.

  • Coverage improvement techniques study: evaluation that attempted to quantify the coverage yields, financial costs, and errors associated with the numerous coverage improvement operations in the 1990 census.

  • Coverage sampling research: experiment to assess the feasibility of two options for reducing coverage error in the census: telephone reinterview of a sample of mailout-mailback households and reinterview (using more detailed household rostering questions) of a sample of nonresponse follow-up households. Samples were drawn in 52 urban mailout-mailback district offices (originally, the experiment was supposed to draw from 103 offices, but 51 were excluded due to their inclusion in other experimental studies or at the request of the Census Bureau’s Field Division).

  • Evaluation of outreach and publicity: set of studies to try to assess the effectiveness of the 1990 census outreach and advertising efforts. As in 1980, this included interviews with samples of households both prior to and immediately following Census Day; it also included structured focus-group discussions with representatives of community organizations and selected demographic subgroups.

SOURCE: Adapted from U.S. Census Bureau (1996:Chap. 11).

A–6
2000 CENSUS

A–6.a
Principal Pretests and Experiments Conducted Prior to the Census
Pretests of Census Operations and Questionnaires
  • 1992 (national sample): Simplified Questionnaire Test (SQT) assessed the effects on mail return rates of form length and of making the form more user-friendly; it used a prenotice letter, a reminder postcard (used in 1990), and a replacement questionnaire in all treatments. Five short forms were tested: the booklet form used in 1990, which had been designed to be read by FOSDIC (the Film Optical Sensing Device for Input to Computers), as a control, and four user-friendly forms: a booklet form containing all of the 1990 content; a micro form containing no housing items and asking only for name, age, sex, race, and ethnicity; the micro form with a request for Social Security number added; and a roster form asking only for name and date of birth. Both the micro and roster forms achieved higher return rates than the booklet form, especially in easy-to-enumerate areas; the user-friendly booklet form achieved a higher return rate than the control form, especially in hard-to-enumerate areas (Dillman et al., 1993a).

  • 1993 (national sample): Implementation Test (IT) assessed the effects on mail return rates of using a prenotice letter, a reminder postcard, a stamped return envelope, and a replacement questionnaire. All treatments used the micro short form. The stamped return envelope had no effect; the prenotice letter and reminder postcard increased return rates by 6 and 8 percentage points, respectively. From comparing the IT treatment that combined a prenotice letter and reminder postcard with the SQT for the micro form, the Census Bureau estimated that the use of a second questionnaire increased the return rate by 10–11 percentage points.

  • 1993 (national sample): Appeals and Long-Form Experiment (ALFE) assessed the effects on mail return rates of motivational appeals and of different formats of the long form. Three motivational appeals were tested with the booklet short form used in the SQT, emphasizing, respectively, the benefits of the census, the confidentiality of the data, and the mandatory nature of responding. The first two appeals had no effect; the third increased return rates by 10–11 percentage points. The long-form portion of the experiment compared the 20-page 1990 census long form, a 28-page user-friendly booklet format, and a 20-page user-friendly row-and-column format. The user-friendly booklet increased return rates compared with the 1990 long form, but only in easy-to-enumerate areas; the row-and-column format had no effect. The design of the experiment did not permit separating the effects of length from format for the long forms tested (Treat, 1993).

  • 1993 (national sample): Spanish Forms Availability Test showed that offering a Spanish-language form increased mail return rates by 2–6 percentage points; however, households that completed the Spanish form had higher item nonresponse rates than households that completed the English form, and most Hispanic households completed the English form.

  • March 1995 (Oakland, CA; Paterson, NJ; Bienville, De Soto, Jackson, Natchitoches, Red River, and Winn Parishes, LA): the “1995 Test Census” served as the first major test of the Census Bureau’s planned Integrated Coverage Measurement (ICM), the use of an independent postenumeration survey to statistically adjust totals via dual-system estimation. New Haven, CT, was originally designated as a test site, but the Bureau decided in September 1994 to scrap that site due to budget considerations. Pursuant to the Census List Improvement Act of 1994, the 1995 Test Census was also the first opportunity for the Census
Bureau to test an address file updated using the U.S. Postal Service’s Delivery Sequence File and a prototype Local Update of Census Addresses (LUCA) program (through which local officials were permitted to review housing unit addresses and maps) in the test jurisdictions. The Census Bureau also studied the extent to which mail returned as undeliverable actually corresponded to vacant or nonexistent units through a premailout postal carrier check similar to previous censuses and tests, through returned advance letters alerting households to the upcoming test, and through returned questionnaire packages. In terms of questionnaire design, the 1995 test used only one short-form design but tested several long-form alternatives ranging from 16 to 53 questions in order to assess differential response. The 1995 test also marked the first step toward what would later become known as service-based enumeration—the attempt to identify and locate homeless people through facilities where they receive services, rather than (or in addition to) nighttime deployment of enumerators, as done in previous censuses.
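The dual-system estimator underlying ICM is the classic capture–recapture formula. In simplified form (ignoring sampling weights, erroneous enumerations, and poststratification), the population estimate for an area is:

```latex
% Simplified dual-system (capture-recapture) estimator:
%   N_cen = census count, N_pes = postenumeration survey count,
%   M     = number of people matched to both
\hat{N} = \frac{N_{\mathrm{cen}} \times N_{\mathrm{pes}}}{M}
```

For example, if the census counted 950 people in an area, the postenumeration survey counted 900, and 855 people matched to both, the estimate is 950 × 900 / 855 = 1,000.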

  • March–June 1996 (national sample of housing units from 1990 census mailback areas not already included in previous tests): National Content Survey testing 13 alternative questionnaire forms (seven short-form designs and six long-form designs). Some treatments tested alternative structures for the race and Hispanic-origin questions (varying their order or allowing for multirace reporting); long-form variations also presented different structures for collecting employment and lay-off data (Raglin, 1998b).

  • June 1996 (Paterson, NJ): field test of ICM procedures, referenced by Whitford (1996).

  • October 1996 (seven tracts in Chicago, IL; Pueblo of Acoma Indian Reservation, NM; Fort Hall Reservation and trust lands, ID): dubbed the “1996 Community Census,” this test served as a further demonstration of the Bureau’s ICM procedures. At least in the Chicago test site, the independent postenumeration survey was conducted on a 100 percent basis (i.e., an interview was conducted at every household generated by an independent listing of addresses) rather than on a sample basis, as planned for the census; this permitted fuller study of the overlap between the initial census and postenumeration survey coverages and facilitated comparison of consistency of reporting on questions like race and Hispanic origin. Also in Chicago, the Census Bureau constructed an administrative records database from numerous sources (including Internal Revenue Service data and the Social Security Administration’s NUMIDENT file) for comparison with the test census results.

Dress Rehearsals

In November 1997, a provision in P.L. 105–119 (Title II, § 209) directed that the Census Bureau “plan, test, and become prepared to implement a 2000 decennial census, without using statistical methods” (i.e., the Census Bureau’s plans for ICM and sampling rather than following up all nonresponding households), and further required that both adjusted and unadjusted counts be made available in the 2000 census or “any dress rehearsal or other simulation made in preparation for the 2000 decennial census.” As part of this compromise between the Census Bureau and Congress over fiscal year 1998 funding, the Bureau’s planned 1998 dress rehearsal in three sites was recast as a major operational test and comparison of alternative census designs.

  • 1998 (Sacramento, CA): full census operational trial using both ICM and sampling for nonresponse follow-up.

  • 1998 (Columbia, SC, and 11 surrounding counties: Chester, Chesterfield, Darlington, Fairfield, Kershaw, Lancaster, Lee, Marlboro, Newberry, Richland, Union): full census operational trial using 100 percent nonresponse follow-up; a postenumeration survey was conducted to assess coverage error but not to produce adjusted estimates.

  • 1998 (Menominee County, WI, including the Menominee Indian Reservation): full census operational trial using ICM and 100 percent nonresponse follow-up.

SOURCES: Adapted from Kim et al. (1998); National Research Council (1995, 1997, 2004a); Raglin (1998a,b); U.S. General Accounting Office (1994); Whitford (1996).

A–6.b
Research, Experimentation, and Evaluation Program
Experiments
  • Census 2000 Alternative Questionnaire Experiment (AQE2000): this experiment used additional mailings and reinterview studies to manipulate three questionnaire design components:

    • Presentation of residence rules on the short form: determine whether providing a brief, reformatted version of the rules improves data quality.

    • Comparing the 1990 and 2000 census presentation of race and Hispanic-origin questions: the two censuses presented the questions in slightly different ways, in wording, format, and design.

    • Design of “skip to” and “go to” instructions in the census long form: determine whether respondents were able to navigate through the paper questionnaire correctly and efficiently.

    Results are reported in Gerber et al. (2002); Martin (2002); Martin et al. (2003); Redline et al. (2002).

  • Administrative Records Census 2000 Experiment (AREX 2000): initial planning for the 2000 census included experimentation with an administrative records census as a possible way to save costs; the idea had been raised but not endorsed by National Research Council (1995:Chap. 4) and National Research Council (1999:Chap. 5). The AREX 2000 experiment assembled national-level administrative records (unduplicated using SSNs) and assigned block-level geographic codes. Records for the five selected test sites were then extracted and tallied at the census block level. A separate branch of the experiment sought to reconcile administrative records with the Master Address File to generate block-level population and housing unit counts. The results of the experiment are reported in Bauder and Judson (2003); Berning (2003); Berning and Cook (2003); Heimovitz (2003); Judson and Bye (2003).

  • Social Security Number, Privacy Attitudes, and Notification Experiment (SPAN): related to the administrative records research, the SPAN experiment probed for behavioral and attitudinal data on public response to queries for their SSNs on census questionnaires. The experiment also tested public response to variations in wording in notices about Census Bureau use of administrative records, as well as surveying public concerns about privacy and confidentiality raised by use of administrative records. The results of the experiment are reported in Brudvig (2003); Guarino et al. (2001); Trentham and Larwood (2003).

  • Response Mode and Incentive Experiment (RMIE): this experiment studied the effectiveness of three electronic modes of data collection:

    • Operator telephone interview: also known as reverse computer-assisted telephone interview (reverse CATI); respondents were encouraged to call a toll-free telephone number, at which time a telephone interviewer administered the questionnaire.

    • Computer telephone interview: also known as the Automated Spoken Questionnaire; respondents were asked to call a toll-free telephone number, at which time the short-form questionnaire was administered by interactive voice response (an automated system).

    • Internet: Respondents were encouraged to answer the questionnaire using a web address provided in a cover letter.

The experiment also tested the impact on response of offering an incentive for completing the questionnaire (specifically, a telephone calling card valid for 30 minutes of free long-distance calling). The results are reported in Caspar (2003); Guarino (2001); Schneider et al. (2002).

  • Census 2000 Supplementary Survey (C2SS): the C2SS extended pilot work on the American Community Survey (ACS). In addition to data collection in 36 ongoing ACS test sites, the C2SS collected data in 1,203 additional counties. The C2SS was conducted as an experiment, with the intent of determining whether it was feasible to collect long-form census data at the same time as, but in a separate process from, the decennial census data collection. The Census Bureau concluded that this simultaneous collection is feasible and that ACS data collection is feasible for a full national sample; the results are reported in Griffin and Obenski (2001).

  • Ethnographic studies

    • Privacy Schemas and Data Collection: the goal of the experiment was to collect qualitative and attitudinal data on survey participation and response, including further probing of privacy concerns and elaborating reasons for choosing to participate in survey data collections. The results of the study are reported in Gerber (2003).

    • Complex Households and Relationships in the Decennial Census and Demographic Surveys: this ethnographic research project assembled six teams to study how well census methods, questions, and categories matched the diversity and experience of modern households. The six teams targeted particular ethnic or race groups: African Americans, Hispanics, Inupiaq Eskimos, Koreans, Navajos, and whites. The results are reported in Schwede (2003).

    • Generation X Speaks Out on Censuses, Surveys, and Civic Engagement: An Ethnographic Approach: this ethnographic study was intended to probe civic engagement and attitudes toward censuses (and surveys in general) among the Generation X population, those born between 1968 and 1979. Within this age cohort, differences by other factors—socioeconomic background, ethnicity, immigrant status, and so forth—were also considered. Members of the subsequent Millennial generation (14–18 years of age) were also interviewed for comparison. The results are reported in Crowley (2003).
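The core mechanics described for AREX 2000 above (unduplicating national administrative records on SSN and tallying them at the census block level) can be sketched in a few lines. This is an illustrative reconstruction, not the Bureau's actual processing; the record fields and block codes are invented.

```python
from collections import Counter

# Invented records: (ssn, census_block) pairs from merged national
# administrative files; the fields and codes are illustrative only.
records = [
    ("111-11-1111", "360470001001"),
    ("111-11-1111", "360470001001"),  # same person, duplicate record
    ("222-22-2222", "360470001001"),
    ("333-33-3333", "360470001002"),
]

# Unduplicate on SSN: keep the first record seen for each person.
seen, unique = set(), []
for ssn, block in records:
    if ssn not in seen:
        seen.add(ssn)
        unique.append((ssn, block))

# Tally the unduplicated persons at the census-block level.
block_counts = Counter(block for _, block in unique)
print(block_counts)  # Counter({'360470001001': 2, '360470001002': 1})
```

The same unduplicated file, joined against the Master Address File, would support the block-level housing unit counts the experiment's second branch examined.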

Evaluations

The Census Bureau’s original slate of evaluations for the 2000 census included 149 studies; in several waves during 2002, this list “was refined and priorities reassessed due to resource constraints at the Census Bureau” (U.S. Census Bureau, 2003), yielding a final set of 91 studies. However, some of the studies “cancelled” from the formal evaluation set were expedited and shifted to the research program surrounding the Executive Steering Committee for A.C.E. Policy, which was making recommendations on the adjustment of census figures for redistricting and other purposes.

  • Response Rates and Behavior Analysis (Series A)

    • Telephone Questionnaire Assistance (TQA) Operational Analysis: assess calling patterns and respondent use of Telephone Questionnaire Assistance and performance of operational system.

    • TQA Customer Satisfaction Survey: present results of quality assurance survey administered to TQA respondents.

    • Internet Data Collection Operational Analysis: evaluate frequency of completion of short-form questionnaires via the Internet.

    • Internet Web Site and Questionnaire Customer Satisfaction Survey: present results of customer satisfaction surveys.

    • Be Counted Campaign: evaluate impact on coverage of Be Counted forms, blank questionnaires that were made publicly available for people who believed they had not been counted in the census.

    • Language Program (Use of Non-English Questionnaires and Guides): document requests for non-English census forms and language assistance guides.

    • Response Methods for Selected Language Groups: examine how certain non-English-speaking groups (Spanish, Chinese, Korean, Vietnamese, and Russian) were enumerated.

    • Awareness and Participation in the Language Assistance Programs Among Selected Language Groups: examine household awareness of Census Bureau’s language assistance programs.

    • U.S. Postal Service Undeliverable Rates for Mailout Questionnaires: examine rate at which the Postal Service classified housing units as “undeliverable as addressed” and evaluate occupancy status of those units.

    • Detailed Reasons for Undeliverability of Census 2000 Mailout Questionnaires by the Postal Service: further analysis of reasons for questionnaires to some housing units being deemed “undeliverable as addressed,” including follow-up work by local census offices.

    • Mailback Response Rates: examine mail response rates by geography and questionnaire check-in dates.

    • Mail Return Rates: examine mail return rates by geography, and compare return rates based on housing unit demographic variables.

    • Puerto Rico Focus Groups on Why Households Did Not Mail Back the Census 2000 Questionnaire: examine reasons for low response to update-leave enumeration as applied for the first time in Puerto Rico.

    • Cancelled: Internet Questionnaire Assistance Operational Analysis.

  • Content and Data Quality (Series B)

    • Imputation Process for 100 Percent Household Population Items: examine national-level rates of assignments, allocations, and substitutions (using Census Bureau terminology).

    • Item Nonresponse Rates for 100 Percent Household Population Items: compare item nonresponse rates by form type and response mode.

    • Census Quality Survey to Evaluate Responses to the Census 2000 Question on Race: interpret results of a follow-up survey asking respondents to identify their race in two ways: choosing only one of several race categories and choosing one or more. The objective was to use these results to help bridge 2000 census results (using multiple-race responses) with other data sources (where single-race responses may still prevail).

    • Content Reinterview Survey: Accuracy of Data for Selected Population and Housing Characteristics as Measured by Reinterview: report results of a follow-up survey, asking long-form-census respondents to resupply long-form data items; the results from the census and survey are then compared for consistency.

    • Master Trace Sample: archive and link the results for randomly selected census records across multiple operational databases, including address list information, data processing archives, enumerator information, and Accuracy and Coverage Evaluation data.

    • Match Study of Current Population Survey to Census 2000: discuss results of person-level match between 2000 census returns and the Current Population Survey, emphasizing differences in estimates of poverty and labor force status.

    • Puerto Rico Race and Ethnicity: examine responses to the race and Hispanic-origin questions from respondents living in Puerto Rico (the questions administered were identical to those asked in the rest of the nation).

    • Puerto Rico Focus Groups on the Race and Ethnicity Questions: results of focus groups conducted in Puerto Rico on perceptions of the race and Hispanic-origin questions.

    • Cancelled (6): Documentation of Characteristics and Data Quality by Response Type; Match Study of A.C.E. to Census to Compare Consistency of Race and Hispanic-Origin Responses; Housing Measures Compared to the American Housing Survey; Housing Measures Compared to the Residential Finance Survey; ACS Evaluation of Follow-Up, Edits, and Imputations; Comparisons of Income, Poverty, and Unemployment Estimates Between Census 2000 and Three Census Demographic Surveys.

  • Data Products (Series C)

    • Effects of Disclosure Limitation on Data Utility: evaluate refinements in the Census Bureau’s use of “data swapping” to prevent inadvertent disclosure of confidential census information; in particular, the evaluation was intended to focus on the impact of a region’s geographic structure and racial diversity on the effectiveness of data swapping.

    • Cancelled (2): Usability Evaluation of User Interface with American FactFinder; Data Products Strategy.
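The “data swapping” technique evaluated in this series can be illustrated with a toy sketch: pairs of households that agree on key attributes but sit in different geographies exchange their geography codes, so tabulations on those attributes are preserved while geographic identification is degraded. The record layout, swapping key, and rate below are invented; the Bureau's actual swapping parameters are confidential.

```python
import random

# Illustrative household records: (geography, household_size, income_band).
households = [
    ("block_A", 2, "low"), ("block_A", 4, "high"),
    ("block_B", 2, "low"), ("block_B", 3, "mid"),
]

def swap_pairs(records, key_fields=(1,), rate=1.0, seed=0):
    """Exchange geography codes between pairs of records that agree on
    the key fields but lie in different geographies."""
    rng = random.Random(seed)
    out = list(records)
    # Index records by their key-field values.
    by_key = {}
    for i, rec in enumerate(out):
        by_key.setdefault(tuple(rec[f] for f in key_fields), []).append(i)
    for idxs in by_key.values():
        rng.shuffle(idxs)
        for a, b in zip(idxs[::2], idxs[1::2]):
            if out[a][0] != out[b][0] and rng.random() < rate:
                # Swap only the geography field; key attributes stay put,
                # so tabulations on those attributes are unchanged.
                out[a], out[b] = ((out[b][0],) + out[a][1:],
                                  (out[a][0],) + out[b][1:])
    return out

swapped = swap_pairs(households)
# The tally of households by size is identical before and after swapping,
# even though some households now carry a different geography code.
```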

  • Partnership and Marketing (Series D)

    • Partnership and Marketing Program: study conducted by the National Opinion Research Center of the effectiveness of the Census Bureau’s marketing and advertising campaign.

    • Census in Schools/Teacher Customer Satisfaction Survey: study conducted by Macro International on the effect of offering Census-related educational materials to school teachers.

    • Survey of Partners/Partnership Evaluation: study conducted by Westat on the helpfulness of 2000 census materials provided to the Census Bureau’s local and community organization partners.

  • Special Places and Group Quarters (Series E)

    • Special Place/Group Quarters Facility Questionnaire (Mode Effect): evaluate effectiveness of the computer-assisted telephone interviewing (CATI) and personal visit (PV) modes used to collect information on special places, based on personal visit reinterviews.

    • Decennial Frame of Group Quarters and Sources: evaluate content, coverage, and sources of the 2000 census roster of group quarters through comparison to other sources and business registers.

    • Group Quarters Enumeration: study aspects of group quarters enumeration, including counts of special places and group quarters and distribution of group quarters per special place and residents per group quarters.

    • Service-Based Enumeration: evaluate conduct of Service-Based Enumeration, which targeted populations in shelters, soup kitchens, regularly scheduled mobile food vans, and targeted nonsheltered outdoor locations in March 2000.

    • Cancelled (3): Special Place/Group Quarters Facility Questionnaire (Operational Analysis); Special Place LUCA; Inventory Development Process for Service-Based Enumeration.

  • Address List Development (Series F)

    • Address Listing Operation and Its Impact on the Master Address File (MAF): assess quality of address listing in areas to be enumerated via update-leave (but not geographic database updates made at the same time).

    • LUCA 1998: evaluate operation that gave local and tribal governments opportunity to review address list entries in areas with predominantly city-style addresses, intended for mailout-mailback enumeration.

    • Block Canvassing Operation: assess quality of 100 percent field canvass of mailout-mailback areas (but not geographic database updates made at the same time).

    • LUCA 1999: evaluate operation that gave local and tribal governments opportunity to review block counts of housing units for areas with predominantly non-city-style addresses.

    • Update-Leave: assess enumeration procedure in which mail delivery was deemed to be problematic and in which direct visits from enumerators were to be used.

    • Urban Update-Leave: assess enumeration procedure used in selected areas in urban centers judged to be not amenable to mailout-mailback.

    • Update-Enumerate: assess enumeration procedure through which interviewers visited housing units once, to verify address list entries and to collect questionnaire information, rather than simply dropping a questionnaire for later return by mail.

    • List/Enumerate: assess enumeration procedure in which enumerators simultaneously listed housing units and collected questionnaire data.

    • Quality of the Geocodes Associated with Census Addresses: study status of addresses still deemed to be “missing” from the census after A.C.E. work had finished.

    • Block Splitting Operation for Tabulation Purposes: examine accuracy of block splitting operations, performed in some areas to tabulate data when governmental unit boundaries do not conform to census collection block boundaries.

    • Cancelled (6): Impact of the Delivery Sequence File Deliveries on the MAF; Evaluation of the MAF Using Earlier Evaluation Data; Criteria for the Initial Decennial MAF Delivery; Decennial MAF Update Rules; New Construction Adds; Overall MAF Building Process for Housing Units.

  • Field Recruiting and Management (Series G)

    • Staffing Programs: eventually issued in two parts, the evaluation focused on hiring programs for nonresponse follow-up, examining both the adequacy of recruitment and the impact of higher pay rates on productivity.

    • Cancelled: Operation Control System.

  • Field Operations (Series H)

    • Operational Analysis of Field Verification Operation for Non-ID Housing Units: report on the resolution of non-ID questionnaires—that is, those from the Be Counted or TQA operations (which were not keyed to a MAF ID) or questionnaires for which an address could not be verified.

    • Questionnaire Assistance Centers: report on effectiveness of walk-in assistance centers where respondents could be provided help in filling out the census questionnaire.

    • NRFU: principal evaluation of enumerator follow-up of nonresponding housing units, including determination of NRFU workloads, demographic profiles of the NRFU respondent population, and the distribution of partial interviews, refusals, and proxy responses.

    • NRFU Enumerator Training: assess the adequacy of training for the large temporary work corps hired to conduct nonresponse follow-up interviewing.

    • Operational Analysis of Enumeration of Puerto Rico: consider the effectiveness of enumeration in Puerto Rico, which was conducted using update-leave methodology in 2000.

    • Local Census Office Profile: operational summary of descriptive statistics on total housing units, average household size, mail return rate, and other information for each local census office.

    • Date of Reference for Respondents: examine discrepancies in respondents’ reported age and their date of birth and derive the average date of reference actually used by census respondents (in comparison to the official comparison date of April 1, 2000).

    • Cancelled (3): Use of 1990 Data for Census 2000 Planning; Local Census Office Delivery of Census 2000 Mailout Questionnaires Returned by U.S. Postal Service With Undeliverable as Addressed Designation; Operational Analysis of Non-Type of Enumeration Area Tool Kit Methods.

  • Coverage Improvement (Series I)

    • Coverage Edit Follow-Up: assessment of the program to resolve discrepancies between reported household size and the actual number of people coded on the form, and to follow up on additional household members beyond the six whose data could be recorded on the standard questionnaire.

    • NRFU Whole Household Usual Home Elsewhere Probe: study resolution of cases in NRFU, List/Enumerate, and Update-Enumerate operations in which a respondent indicated that their current address was a seasonal or vacation home and their usual home was elsewhere. The evaluation was to consider how many such forms were filed and how well the reported “usual home” data compared with the MAF or other census enumerations.

    • NRFU Mover Probe: report on in-movers (people who moved to their current households after Census Day and detected in follow-up or direct enumeration methods), inquiring whether they completed a census form at their Census Day address.

    • Coverage Improvement Follow-Up: report on the phase of follow-up that verified and collected information from units flagged as vacant or delete earlier in follow-up; units added in the New Construction operation; and other late additions, blank mail returns, and lost mail returns.

    • Coverage Gain from Coverage Questions on Enumerator Completed Questionnaire: consider effectiveness of change in approach from 1990, obtaining information on missing or erroneously included persons through a set of questions rather than a recitation of residence definitions as in 1990.

    • Cancelled: Comparative Study of Coverage, Rostering Methods, and Household Composition in the Current Population Survey and the Census.

  • Ethnographic Studies (Series J)

    • Ethnographic Social Network Tracing: use of social network analysis to study patterns of residential mobility and its impact on the census count, as well as how reliably people can be identified from their social networks.

    • Comparative Ethnographic Research on Mobile Populations: report on characteristics and challenges in enumerating select hard-to-count groups: urban gang members, Irish Travelers in Mississippi and Georgia, Arizona “snowbirds,” and American Indian populations in the San Francisco area.

    • Colonias on the U.S./Mexico Border: ethnographic study of challenges (e.g., irregular housing stock, complex household structures, heightened concerns about confidentiality) in residential subdivisions along the U.S./Mexico border that are generally unincorporated, low income, and difficult to reach in survey research.

  • Data Capture (Series K)

    • Data Capture Audit Resolution Process: document results of audit of data capture processing, including failure reasons, form type, and differential effects by data capture site.

    • Quality of the Data Capture System and the Impact of Questionnaire Capture and Processing on Data Quality: study impact of automated data capture on data quality and compare data quality by questionnaire item and form type, among other variables.

    • Impact of Data Capture Errors on Autocoding, Clerical Coding and Autocoding Referrals in Industry and Occupation Coding: study of use of automated data capture routines, as well as clerical review, in parsing reported industry and occupation information and coding into standardized categories.

    • Cancelled (4): Analysis of Data Capture System 2000 Keying Operations; Synthesis of Results from Data Capture Audit Studies; Analysis of the Interaction Between Aspects of Questionnaire Design, Printing, and Completeness with Data Capture; Performance of the Data Capture System 2000.

  • Processing Systems (Series L)

    • Decennial Response File Stage 2 Linking and Setting of Expected Household Population: document the linkages drawn between census forms and returns during initial response file processing, as well as the accuracy of algorithms for calculating expected household size.

    • Operational Assessment of Primary Selection Algorithm Results: study of unduplication algorithm, used to resolve cases in which several questionnaires were received from the same address (MAF ID number).

    • Resolution of Multiple Census Returns Using Reinterview: further research regarding the accuracy of the Primary Selection Algorithm, using a follow-up reinterview on a sample of addresses affected by the algorithm.

    • Census Unedited File Creation: document results of process for determining final housing unit inventory, done by merging information on the processed Decennial Response File (census returns) with the Decennial MAF.

    • Beta Site: operational analysis of the software evaluation facility within the Census Bureau that is responsible for integrating the software systems of the census as well as for conducting security testing.

    • Cancelled: Invalid Return Detection.
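The Primary Selection Algorithm evaluated in this series resolved cases in which several questionnaires were received for the same MAF ID. Its general shape can be sketched as follows; the selection rule here is invented for illustration, and the production algorithm applied more elaborate criteria.

```python
# Hypothetical duplicate returns keyed by MAF ID (address identifier).
returns = [
    {"maf_id": "A1", "source": "mail",       "persons": 3},
    {"maf_id": "A1", "source": "enumerator", "persons": 2},
    {"maf_id": "B7", "source": "mail",       "persons": 1},
]

def select_primary(returns):
    # Group returns by address, then keep one "primary" return each.
    groups = {}
    for r in returns:
        groups.setdefault(r["maf_id"], []).append(r)
    # Illustrative rule: most persons wins; ties favor mail returns.
    return {
        maf_id: max(grp, key=lambda r: (r["persons"], r["source"] == "mail"))
        for maf_id, grp in groups.items()
    }

chosen = select_primary(returns)
print(chosen["A1"]["source"])  # mail
```

The follow-up reinterview study listed above then checked, on a sample of affected addresses, whether the return the algorithm kept was in fact the correct one.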

  • Quality Assurance Evaluations (Series M)

    • Evaluation of Quality Assurance Philosophy and Approach Used for the Address List Development and Enumeration Operations: document operational experiences with quality assurance approach.

    • Effectiveness of Existing Variables in the Model Used to Detect Discrepancies During Reinterview, and the Identification of New Variables: document specific quality assurance measure used in cases in which enumerators’ work suggested discrepancies.

  • A.C.E. Survey Operations (Series N)

    • Contamination of Census Data Collected in A.C.E. Blocks: study intended to determine success in keeping census and A.C.E. operations independent.

    • Discrepant Results in A.C.E.: examine how quality assurance steps identified interviewers who entered discrepant data in the A.C.E. interview.

    • Evaluation of Matching Error: assess error in the matching process used to identify missed or erroneously enumerated persons between the census and the A.C.E.

    • Targeted Extended Search Block Cluster Analysis: study results of expanding the search for possible matches from adjoining blocks for all sample clusters (as in 1990) to a broader search (still targeted to clusters judged most likely to benefit from additional searching).

    • Field Operations and Instruments for A.C.E.: assess quality of housing unit and person coverage in A.C.E. operations by examining the quality of the (independent) address listing, effect of follow-up interviewing, and noninterview rates.

    • Cancelled (16, most converted to separate A.C.E. evaluation program): Analysis of Listing Future Construction and Multi-Units in Special Places; Analysis of Relisted Blocks; Analysis of Blocks With No Housing Unit Matching; Analysis of Blocks Sent Directly for Housing Unit Follow-Up; Analysis of Person Interview With Unresolved Housing Unit Status; Analysis on the Effects of Census Questionnaire Data Capture in A.C.E.; Analysis of the Census Residence Questions Used in A.C.E.; Analysis of the Person Interview Process; Extended Roster Analysis; Matching Stages Analysis; Analysis of Unresolved Codes in Person Matching; Outlier Analysis in the A.C.E.; Impact of Targeted Extended Search; Effect of Late Census Data on Final Estimates; Group Quarters Analysis; Analysis of Mobile Homes.
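Several of the evaluations in this series concern the matching of A.C.E. person records to census returns. A toy sketch of the general blocking-and-comparison shape follows; the fields and the exact-agreement rule are invented, and the production matcher used probabilistic scoring plus clerical review.

```python
# Toy census-to-P-sample person matching: block on census block code,
# then require exact agreement on name and birth year.
census = [
    {"block": "0001", "name": "ANA RUIZ", "birth_year": 1970},
    {"block": "0001", "name": "JO SMITH", "birth_year": 1988},
]
pes = [  # independent postenumeration survey (P-sample) records
    {"block": "0001", "name": "ANA RUIZ", "birth_year": 1970},
    {"block": "0002", "name": "KIM LEE",  "birth_year": 1955},
]

def match(census, pes):
    matches, used = [], set()
    for i, p in enumerate(pes):
        for j, c in enumerate(census):
            if j in used or c["block"] != p["block"]:
                continue  # blocking: only compare within the same block
            if c["name"] == p["name"] and c["birth_year"] == p["birth_year"]:
                matches.append((i, j))  # P-sample record i matches census j
                used.add(j)
                break
    return matches

print(match(census, pes))  # [(0, 0)]
```

Unmatched P-sample records suggest census omissions; unmatched census records are candidates for erroneous enumeration, which is why matching error feeds directly into coverage estimates.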

  • Coverage Evaluations of the Census and of the A.C.E. Survey (Series O)

    • Housing Unit Coverage Study: examine net coverage rate, gross omission rate, and erroneous enumeration rate of housing units, at various geographic levels as well as by A.C.E. poststrata.

    • Analysis of Conflicting Households: report on resolution of cases in A.C.E. housing unit matching when the census and the A.C.E. listed two entirely different families for the same unit.

    • Analysis of Proxy Data in the A.C.E.: study accuracy (match rates and erroneous enumeration rates) of data collected from proxy respondents: persons who are not members of the household, such as neighbors or landlords.

    • Housing Unit Duplication in Census 2000: study characteristics of duplicate housing units, attempting to identify census operations most likely to produce housing unit duplication.

    • Analysis of Deleted and Added Housing Units in the Census Measured by the A.C.E.: evaluate housing unit coverage on the early Decennial MAF.

    • Consistency of Census Estimates with Demographic Benchmarks: comparison of census results with demographic analysis benchmarks.

    • Cancelled (20, most converted to separate A.C.E. evaluation program or combined with other evaluations): Type of Enumeration Area Summary; Coverage of Housing Units in the Early Decennial MAF; P-Sample Nonmatches Analysis; Person Coverage in Puerto Rico; Housing Unit Coverage in Puerto Rico; Geocoding Error Analysis; E-Sample Erroneous Enumeration Analysis; Analysis of Nonmatches and Erroneous Enumerations Using Logistic Regression; Analysis of Various Household Types and Long-Form Variables; Measurement Error Reinterview Analysis; Impact of Housing Unit Coverage on Person Coverage Analysis; Person Duplication; Analysis of Households Removed Because Everyone in the Household Is Under 16 Years of Age; Synthesis of What We Know About Missed Census People; Implications of Net Census Undercount on Demographic Measures and Program Uses; Evaluation of Housing Units Coded as Erroneous Enumerations; Analysis of Insufficient Information for Matching and Follow-Up; Evaluation of Lack of Balance and Geographic Errors Affecting Person Estimates; Mover Analysis; Analysis of Balancing in the Targeted Extended Search.

  • A.C.E. Survey Statistical Design and Estimation (Series P)

    • Cancelled (5, most converted to separate A.C.E. evaluation program): Measurement of Bias and Uncertainty Associated with Application of the Missing Data Procedures; Synthetic Design Research/Correlation Bias; Variance of Dual System Estimates and Adjustment Factors; Overall Measures of A.C.E. Quality; Total Error Analysis.
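The dual system estimates referenced in this series rest on a capture-recapture identity. In simplified form (omitting the A.C.E.'s corrections for erroneous enumerations, missing data, and the poststratified application of adjustment factors), the estimator is:

```latex
% Simplified dual system estimator (capture-recapture form)
% C = census count of correct enumerations in a poststratum
% P = independent P-sample (postenumeration survey) count
% M = persons matched between the two systems
\hat{N} = C \cdot \frac{P}{M}
```

The cancelled variance and total-error studies listed above were to quantify the uncertainty in $\hat{N}$ and in the resulting adjustment factors.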

  • Organization, Budget, and Management Information System (Series Q)

    • Management Processes and Systems: study the Census Bureau’s organizational structure and decision-making processes, as well as its interaction with the Census Monitoring Board, Congress, the General Accounting Office, and other outside interests.

  • Automation of Census Processes (Series R); reviews of technical systems conducted by Titan Corporation

    • Telephone Questionnaire Assistance: toll-free service provided by a commercial phone center to answer questions about the census and the questionnaire.

    • Coverage Edit Follow-Up: program to resolve count discrepancies and obtain missing data for large households.

    • Internet Questionnaire Assistance: system that allowed respondents to use the Census Bureau’s Internet site to ask questions and receive answers about the census questionnaire or other census-related information.

    • Internet Data Collection: system that offered census short-form respondents the opportunity to respond via the Internet, using the 22-digit ID number found on their mailed census form.

    • Operations Control System 2000: system for control, tracking, and progress reporting for all field operations conducted for the census.

    • Laptop Computers for A.C.E.: systems used in A.C.E. follow-up interviewing.

    • A.C.E. Control System: tracking and control system behind the A.C.E. field operations.

    • Matching and Review Coding System for A.C.E.: system used to match A.C.E. returns to census records.

    • Pre-Appointment Management System/Automated Decennial Administrative Management System: system used for administrative management in the census, including tracking and processing of temporary enumerators, payroll, and background checks.

    • American FactFinder: systems developed to provide access to census data via the Internet at http://factfinder.census.gov.

    • Management Information System 2000: systems used to manage census operations, including tracking of dates and budgets and creation of progress reports of current status during census operations.

    • Data Capture: systems developed for full electronic data capture and imaging of census questionnaires, using optical mark and optical character recognition.

SOURCE: Adapted from National Research Council (2004a:App. I).

A–7
PRINCIPAL PRETESTS AND EXPERIMENTS CONDUCTED PRIOR TO THE 2010 CENSUS

A–7.a
Pretests of Census Operations and Questionnaires
  • 2002 (Gloucester County, VA): pilot testing of use of Census TIGER maps on a handheld computer (Pocket PC-class) by census interviewers. The tests involved only locating particular address features on the small-screen map, not using the computer map to navigate a route or to collect GPS coordinates.

  • February–April 2003 (national sample): the 2003 National Census Test was administered to a national sample of about 250,000 households, drawn from the set of households that was enumerated using mailout-mailback methodology in the 2000 census. Strictly mail-based, the 2003 test involved no field follow-up component. The test focused primarily on two issues:

    • Response mode and contact strategies: different experimental groups were offered the opportunity to reply by mail (traditional method), Internet, or interactive voice response (IVR, an automated telephone system). Groups also varied as to whether these response modes were offered as a choice or whether they were “pushed” (e.g., providing Internet directions but no actual paper questionnaire in the mailing). Finally, contact strategies (including targeted replacement questionnaires and reminder postcards) were also varied. This component of the test involved eight experimental groups, one with 20,000 households and the other seven with 10,000 households each.

    • Race and ethnicity (Hispanic-origin) question wording: seven treatment groups of 20,000 households each received different variations on the wording and arrangement of questions on race and Hispanic origin. Experimental settings included whether “some other race” was offered as a choice in the categories for race, whether wording was slightly revised to ask respondents if they are “Hispanic” or if they are “of Hispanic origin,” and whether instructions explicitly directed respondents to answer both questions.

The test was rounded out by a control group of 20,000 households; this group’s questionnaire included the race and Hispanic-origin questions worded as they were in the 2000 census (unlike the 2000 census context, the 2003 test control group households were eligible for a replacement questionnaire in lieu of nonresponse follow-up).

The samples for all groups were stratified by response rate in the 2000 census, with addresses classified into “high” and “low” response strata based on a selected cutoff. Martin et al. (2003:11) comment that the low-response strata “included areas with high proportions of Blacks and Hispanics and renter-occupied housing units” and further note that addresses in low-response areas were oversampled. Still, it is unclear whether the sample design generated enough coverage in Hispanic communities to facilitate conclusive comparisons—that is, whether it reached enough of a cross-section of the populace and a sufficiently heterogeneous mix of Hispanic nationalities and origins to gauge sensitivity to very slight and subtle changes in question wording.

With regard to the response mode and contact strategy portion of the test, results reported by Treat et al. (2003) suggest that offering multiple response modes may shift responses across modes (moving some would-be mail responses to the Internet, for example) but does not generally increase overall cooperation. The experience of the 2003 test suggests serious difficulties with the interactive voice response option: 17–22 percent of IVR attempts had to be transferred to a human agent when the system detected that the respondent was having difficulty progressing through the IVR questionnaire. Moreover, rates of item nonresponse were higher for IVR returns than for the (paper-response) control group; Internet returns, by comparison, had higher item response rates than the control. As indicated in past research, reminder postcards and replacement questionnaires had a positive effect on response.

Martin et al. (2003) report that the race and Hispanic-origin question segment of the test showed mixed results. Predictably, elimination of “some other race” as a response category reduced “some other race” responses considerably, by 17.6 percent (i.e., Hispanic respondents apparently declined to write in a generic response like “Hispanic” or “other” if “some other race” was not a formal choice). The Bureau concluded that the 17.6 percent decline in generic race reporting “more than offset” the impact of a 6.4 percent increase in the estimated number of Hispanics declining to answer the race question altogether (Martin et al., 2003:15). Adding examples of ancestry groups (e.g., Salvadoran, Mexican, Japanese, Korean) boosted the reporting of detailed origins among Hispanics, Asians, and Native Hawaiian or Other Pacific Islanders. Treatment groups for which instructions were revised, instructing respondents to answer both the race and Hispanic-origin questions, produced the most puzzling results; levels of missing data on one or both questions increased, as did the percentage reporting themselves as Native Hawaiian or Other Pacific Islanders (relative to the control group).

  • February–July 2004 (7 neighborhoods of Queens County, NY [Astoria, Corona, Elmhurst, East Elmhurst, Jackson Heights, Long Island City, and part of Woodside]; Colquitt, Thomas, and Tift Counties, GA): the 2004 test was intended to try out a package of new procedures and technologies in both a high-density urban site and a rural site. Lake County, IL, was originally designated a test site but was dropped due to constraints in funding for fiscal year 2004. Intended to include approximately 200,000 housing units, the test centered on a few major topics:

    • Handheld devices: the test marked the Census Bureau’s first attempt to use handheld computers, equipped with GPS receivers, for nonresponse follow-up interviewing. The handhelds developed for use in the 2004 test were pieced together from commercial, off-the-shelf components; although the test included some advanced workflows (e.g., transmitting questionnaire data directly from enumerators’ devices to headquarters and pushing enumerator assignments directly to individual handhelds, without filtering through regional or local offices), some parts of case management and assignment were still done using paper reports.

    • Further testing of race and Hispanic-origin question wording: the 2004 test permitted continuation of work from the 2003 test.

    • Special place/group quarters definition: the 2004 test field exercises allowed Bureau staff to test the aptness of revised definitions of group quarters (nonhousehold) populations.

In addition to dropping the Illinois test site, other planned parts of the 2004 test were eliminated from the test plan, including an attempt to target a mailing of dual-language (English and Spanish) questionnaires to certain households and to target canvassing methods for updating the MAF.

The 2004 field test also provided the first chance to test a version of the Bureau’s planned Coverage Follow-Up (CFU) operation, a combination of coverage improvement programs from previous censuses and computerized matching to detect potential census duplicates. Given the relatively small size of the test, the Bureau was able to perform follow-up on all eligible cases and to conduct a post hoc clerical review of cases to specify the likely source of duplication. Follow-up interviews were conducted both by telephone (experienced interviewers) and field staff (temporary, relatively novice interviewers).
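
A toy illustration of duplicate detection by record matching, in the general spirit of the CFU's computerized matching described above: records that agree on a normalized name and date of birth but arrive from different addresses are grouped for follow-up. The field names, normalization, and matching key are invented for this sketch and do not reflect the Bureau's actual algorithm.

```python
from collections import defaultdict

def flag_potential_duplicates(records):
    """Group person records that may be census duplicates.

    Each record is a dict with 'id', 'name', 'dob', and 'address'.
    Records sharing a normalized (name, dob) key across more than one
    address are flagged as a potential-duplicate group.
    """
    groups = defaultdict(list)
    for rec in records:
        key = (rec["name"].strip().lower(), rec["dob"])
        groups[key].append(rec)
    flagged = []
    for recs in groups.values():
        if len({r["address"] for r in recs}) > 1:
            flagged.append([r["id"] for r in recs])
    return flagged

records = [
    {"id": 1, "name": "Ana Diaz", "dob": "1990-02-14", "address": "12 Oak St"},
    {"id": 2, "name": "ana diaz", "dob": "1990-02-14", "address": "7 Elm Ave"},
    {"id": 3, "name": "Bo Chen", "dob": "1985-07-01", "address": "12 Oak St"},
]
dupes = flag_potential_duplicates(records)  # [[1, 2]]
```

In practice the follow-up interview, not the matcher, decides which of the flagged records to retain, which is why the 2004 test paired the matching with a post hoc clerical review.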

  • July 2004 (France, Kuwait, and Mexico): Overseas Enumeration Test meant to assess the feasibility of counting all Americans living overseas, motivated by congressional interest. Interest in the issue had been heightened by one of Utah’s legal challenges to the 2000 census (having been edged out for the 435th seat in the U.S. House of Representatives by North Carolina) concerning the counting of missionaries of the Church of Jesus Christ of Latter-day Saints. To carry out the test, the Bureau adopted techniques similar to those used to elicit overseas resident counts in the 1960 and 1970 censuses: persons living overseas had to contact a U.S. embassy or consular office in order to obtain the census questionnaire. The test received some publicity in English-language newspapers and media in the three countries. Costing approximately $8 million, the test yielded very low response: 5,390 questionnaires in total, compared with 520,000 questionnaires printed for the test and rough estimates on the order of 1.15 million American citizens in the test countries. Although the 2004 effort was originally intended to be repeated in 2006, no funding for the overseas test was included in the Bureau’s appropriations for that year.

  • August–September 2005 (national sample): a second, mail-only National Census Test in 2005 involved only variations in questionnaire design. The test reached a sample of about 420,000 households and was intended to simultaneously address several major objectives: (1) test revised questions on tenure, relationship, age, date of birth, race, and Hispanic origin; (2) test respondent friendliness of new designs of mail and Internet questionnaires; (3) compare revised versions of the Question 1 household count item, including residence rules instructions, and further test coverage probe questions selected from those tested in 2003; (4) test use of a replacement questionnaire; and (5) test use of a bilingual English-Spanish questionnaire. All of these objectives were intended to be covered by a set of 20 experimental treatment groups within a single-panel test; the control group included some items worded as they had been in the 2000 census, but other items used versions tested in 2003 or 2004. Although there was no field follow-up, a sample of respondents was reinterviewed by telephone to assess the consistency of household rostering. Logistically, the test encountered problems as it coincided with the impact of Hurricanes Katrina and Rita and the disruption or suspension of mail service in the Gulf Coast area; the test was also complicated by the decision to keep the Internet questionnaire English-only and in a single format that lacked the other experimental features, preventing Internet returns from being used in studying some treatments.

  • Early 2006 (national sample): ad hoc Short Form Mail Experiment with a planned sample size of about 24,000 addresses from the U.S. Postal Service’s Delivery Sequence File. Not originally part of the testing plan for the 2010 census, the Census Bureau sought authority for the test in a Federal Register notice in October 2005. As described in that notice (Federal Register, October 5, 2005, p. 58181), this special mailout test was developed based on three objectives:

    • “Evaluate the effects of the wording of the instruction about who to list as Person 1” on the questionnaire (the householder, with whom the reported relationships of other household members are defined);

    • “Evaluate the proportion of respondents who forget to enumerate themselves by asking them to provide their personal information at the end of the form” (with “personal information” described as name, phone number, and proxy status—that is, whether the respondent is completing a form for someone else); and

    • “Evaluate how a compressed schedule with a fixed due date impacts unit response patterns.” (The notice specified only that the “compressed schedule” would change from questionnaires being “mailed 2 weeks before ‘Census Day’ ” to “households receiv[ing] the questionnaires a few days before ‘Census Day.’ ”)

The Census Bureau divided the sample into four treatment groups (Federal Register, October 5, 2005, p. 58181):

  • Group 1. Housing units in this treatment group will receive questionnaires with the same wording for the Person 1 instruction that we used in the Census 2000 questionnaire. In the Final Question, respondents will be asked to provide their name, telephone number and proxy information. The mail out schedule will be the conventional schedule. The questionnaire will be mailed two weeks before “Census Day”, and there will be no explicit deadline.

  • Group 2. Housing units in this treatment group will receive questionnaires with the revised wording for the Person 1 instruction. In the Final Question, respondents will be asked to provide their name, telephone number and proxy information. The mailout schedule will be the conventional schedule. The questionnaire will be mailed two weeks before “Census Day” and there will be no explicit deadline.

  • Group 3. Housing units in this treatment group will receive questionnaires with the revised wording for the Person 1 instruction. In the Final Question, respondents will be asked to check over their answers before considering the survey complete. The mailout schedule will be the conventional schedule. The questionnaire will be mailed two weeks before “Census Day” and there will be no explicit deadline.

  • Group 4. Housing units in this treatment group will receive questionnaires with the revised wording for the Person 1 instruction. In the Final Question, respondents will be asked to check over their answers before considering the survey complete. The mailout schedule will be compressed, so that the survey is received closer to “Census Day” and an explicit due date will be provided.
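
The division of the roughly 24,000 sampled addresses into the four groups above can be sketched as a simple random assignment. The equal-size allocation and the seed are assumptions for illustration; the Federal Register notice does not specify the Bureau's actual assignment procedure.

```python
import random

def assign_treatment_groups(address_ids, n_groups=4, seed=2006):
    """Randomly assign sampled addresses to equal-size treatment groups.

    Shuffles the address list once, then deals addresses round-robin
    into groups numbered 1..n_groups, yielding a balanced random split.
    """
    rng = random.Random(seed)
    shuffled = list(address_ids)
    rng.shuffle(shuffled)
    groups = {g: [] for g in range(1, n_groups + 1)}
    for i, addr in enumerate(shuffled):
        groups[1 + i % n_groups].append(addr)
    return groups

groups = assign_treatment_groups(range(24_000))
```

Because assignment is random, differences in unit response across the four groups can be attributed to the Person 1 wording, the Final Question, and the mailing schedule rather than to the composition of the groups.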

  • April 2006 (part of Travis County, TX, including the cities of Austin and Pflugerville; Cheyenne River Indian Reservation, SD): a second operational test involving field work, the 2006 Census Test used mailout-mailback (with field nonresponse follow-up) in the Texas site and enumerator visits (no mail) in the South Dakota site. As in 2004, handheld computers were used for enumerator interviews and were assembled from off-the-shelf components; in 2006, the new aspect of handhelds being tested was the delivery of maps to enumerators via the devices. The Census Bureau’s list of definitions of group quarters was expanded following the 2004 test, and the usefulness of this revised set was assessed in Travis County.

A–7.b
Dress Rehearsal
  • April 2008 (San Joaquin County, CA; nine-county area surrounding Fayetteville, NC [Chatham, Cumberland, Harnett, Hoke, Lee, Montgomery, Moore, Richmond, and Scotland Counties, including Fort Bragg and Pope Air Force Base]): as a full rehearsal of census activities, the 2008 dress rehearsal actually began operations in spring 2007 with address canvassing. This activity was the first operational trial of custom handheld devices designed by Harris Corporation and its subcontractors under the Field Data Collection Automation (FDCA) contract; problems encountered in use of the handhelds precipitated a crisis in funding and a “replan” of FDCA and core census operations in the first half of 2008.

SOURCES: Adapted from “2010 Census: How We Prepare for 2010” (http://2010.census.gov/2010census/about_2010_census/007623.html); for the 2006 Short Form Mail Experiment, Federal Register, October 5, 2005, pp. 58180–58182; Hill et al. (2006); Karl et al. (2005); Knight et al. (2005); National Research Council (2004b:Boxes 9.1 and 9.2); Pennington (2005); Tancreto (2006).
