1
Introduction

The Census of Population and Housing is carried out in the United States every 10 years, and the next census is scheduled to begin its mailout-mailback operations in March 2010. For at least the past 50 years, each decennial census has been accompanied by a research program of evaluation or experimentation. The Census Bureau typically refers to a census “experiment” as a study involving field data collection—typically carried out simultaneously with the decennial census itself—in which alternatives to census processes currently in use are assessed for a subset of the population. By comparison, census “evaluations” are usually post hoc analyses of data collected as part of the decennial census process to determine whether individual steps in the census operated as expected. Collectively, census experiments and evaluations are designed to inform the Census Bureau as to the quality of the processes and results of the census, as well as to help plan for modifications and innovations that will improve the (cost) efficiency and accuracy of the next census. The Census Bureau is currently developing a set of evaluations and experiments to accompany the 2010 census, which the Bureau refers to as the 2010 Census Program for Evaluations and Experiments, or CPEX.


These two activities of the more general census research program are concentrated during the conduct of the census itself, but census-related research activities continue throughout the decade. Traditionally, the Census Bureau’s intercensal research has been focused on a series of census tests, some of which are better described as “test censuses” because they are conducted in specific geographic areas and can include fieldwork (e.g., in-person follow-up for nonresponse) as well as contact through the mail or other means. The sequence of tests usually culminates in a dress rehearsal two years prior to the decennial census. In addition to the test censuses, the Census Bureau has also conducted some smaller scale experimental data collections during the intercensal period.

CHARGE TO THE PANEL

As it began to design its CPEX program for 2010, the Census Bureau requested that the Committee on National Statistics of the National Academies convene the Panel on the Design of the 2010 Census Program of Evaluations and Experiments. The panel’s charge is to:

… consider priorities for evaluation and experimentation in the 2010 census. [The panel] will also consider the design and documentation of the Master Address File and operational databases to facilitate research and evaluation, the design of experiments to embed in the 2010 census, the design of evaluations of the 2010 census processes, and what can be learned from the pre-2010 testing that was conducted in 2003-2006 to enhance the







testing to be conducted in 2012-2016 to support census planning for 2020. Topic areas for research, evaluation, and testing that would come within the panel’s scope include questionnaire design, address updating, nonresponse follow-up, coverage follow-up, unduplication of housing units and residents, editing and imputation procedures, and other census operations. Evaluations of data quality would also be within scope. …

More succinctly, the Census Bureau requests that the panel:

• Review proposed topics for evaluations and experiments;
• Assess the completeness and relevance of the proposed topics for evaluation and experimentation;
• Suggest additional research topics and questions;
• Recommend priorities;
• Review and comment on methods for conducting evaluations and experiments; and
• Consider what can be learned from the 2010 testing cycle to better plan research for 2020.

The panel is charged with evaluating the 2010 census research program, primarily as a means of setting the stage for the 2020 census. As its first task, the panel was asked to review an initial list of research topics compiled by the Census Bureau, with an eye toward identifying priorities for specific experiments and evaluations in 2010. This first interim report uses the Bureau’s initial suggestions as a basis for commentary on the overall shape of the research program surrounding the 2010 census and leading up to the 2020 census. The specific goal of this report is to suggest priorities for the experiments to be conducted in conjunction with the 2010 census, because they are the most time-sensitive. To some observers, a two-year span between now and the fielding of the 2010 census may seem like a long time; in the context of planning an effort as complex as the decennial census, however, it is actually quite fleeting.

Experimental treatments must be specified, questionnaires must be tested and approved, and systems must be developed and integrated with standard census processes—all at the same time that the Bureau is engaged in an extensive dress rehearsal and final preparations for what has long been the federal government’s largest and most complex non-military operation. Accordingly, the Census Bureau plans to identify topics for census experiments to be finalized by winter 2007 and to have more detailed plans in place in summer 2008; this report is an early step in that effort.

Although this report is primarily about priorities for experiments, we also discuss the evaluation component of the CPEX, because even the basic possibilities for specific evaluations depend critically on the data that are collected during the conduct of the census itself. Hence, we offer comments about the need to finalize plans for 2010 data collection—whether in house by the Census Bureau or through its technical contractors—in order to facilitate a rich and useful evaluation program.

We will continue to study the CPEX program over the next few years, and we expect to issue at least one more report; subsequent reports will respond to the Bureau’s evolving development of the CPEX plan and will provide more detailed guidance for the conduct of specific evaluations and experiments.

BACKGROUND: EXPERIMENTS AND EVALUATIONS IN THE 2000 CENSUS

As context for the discussion that follows, and to convey a sense of the scope of CPEX, it is useful to review briefly the experiments and evaluations of the previous census. The results of the full Census 2000 Testing, Experimentation, and Evaluation Program are summarized by Abramson (2004).

Experiments

The Census Bureau carried out five experiments in conjunction with the 2000 census. Several ethnographic studies were also conducted during the 2000 census; about half of these were considered part of the formal evaluation program, whereas the others were collectively designated as a sixth experiment.

Census 2000 Alternative Questionnaire Experiment (AQE2000): AQE2000 comprised three experiments for households in the mailout-mailback universe of the 2000 census. The skip instruction experiment examined the effectiveness of different methods for guiding respondents through an alternative long-form questionnaire with skip patterns. The residence instructions experiment tested various methods (format, presentation, and wording of instructions) for representing the decennial census residence rules on the questionnaire; the hope was to improve within-household coverage by modifying the roster instructions. Finally, the race and Hispanic origin experiment compared the 1990 race and Hispanic origin questions with the questions on the Census 2000 short form, specifically assessing the effects of permitting the reporting of more than one race and of reversing the sequence of the race and Hispanic origin items. This experiment is summarized by Martin et al. (2004).

Administrative Records Census 2000 Experiment (AREX 2000): AREX 2000 was designed to assess the value of administrative records data in conducting an administrative records census. As a by-product, it also provided useful information on the value of administrative records in carrying out, or assisting in, various applications in support of conventional decennial census processes. AREX 2000 used administrative records to provide information on household counts, date of birth, race, Hispanic origin, and sex, linked to a corresponding block code. The test was carried out in five counties at two sites: Baltimore City and Baltimore County in Maryland, and Douglas, El Paso, and Jefferson counties in Colorado, with approximately 1 million housing units and a population of approximately 2 million. Population coverage for the more thorough of the schemes tested was between 96 and 102 percent relative to the Census 2000 counts for the five test site counties. However, the AREX 2000 and the census counted the same number of people in a housing unit only

51.1 percent of the time, and the two counts differed by at most one person only 79.4 percent of the time. The differences between the administrative records–based counts and the census counts were attributed primarily to errors in address linkage and to typical deficiencies in administrative records (missed children, lack of representation of special populations, and deficiencies resulting from the time gap between the administrative records extracts and Census Day). Another important finding was that administrative records are not currently a good source of data for race and Hispanic origin, and the models used to impute race and Hispanic origin were not sufficient to correct the deficiencies in the data. The experiment is summarized by Bye and Judson (2004).

Social Security Number, Privacy Attitudes, and Notification Experiment (SPAN): This experiment assessed the public’s attitudes regarding the census and its uses, trust and privacy issues, the Census Bureau’s confidentiality practices, possible data sharing across federal agencies, and the willingness of individuals to provide their Social Security number on the decennial census questionnaire. It also assessed the public’s attitude toward the use of administrative records in taking the census. The experiment is described in detail by Larwood and Trentham (2004).

Response Mode and Incentive Experiment (RMIE): The RMIE investigated the impact of three computer-assisted data collection techniques on response rates and on the quality of the data collected: computer-assisted telephone interviewing (CATI), the Internet, and interactive voice response. The households in six panels were given the choice of providing their data via the usual paper forms or by one of these alternate modes. Half of the panels were offered an incentive—a telephone calling card good for 30 minutes of calls—for using the alternate response mode. In addition, the experiment included a nonresponse component designed to assess the effect of such an incentive on response among a sample of census households that had failed to return their census forms by April 26, 2000; this was intended to test the effect of these factors on a group representing those who would be difficult to enumerate. A final component of the experiment involved interviewing households that were assigned to the Internet mode but opted to complete the traditional paper census form, to determine why they did not use the Internet. One finding was that the Internet provided relatively high-quality data. However, among respondents who were aware of the Internet option, 35 percent reported that they believed the paper census form would be easier to complete. Other reasons for not using the Internet included lack of access to a computer, concerns about privacy, forgetting that the Internet was an option, and insufficient knowledge of the Internet. The incentive did not increase response but instead redirected response to the alternate modes. The CATI option seemed to be preferred over the other two alternate modes. Caspar (2004) summarizes the experiment’s results.

Census 2000 Supplementary Survey (C2SS): By 1999, the basic notion that the new American Community Survey (ACS) would take the role of the traditional census long-form sample had been established (this is discussed in more detail in the next section). ACS testing had grown to include fielding in about 30 test sites (counties), with full-scale implementation planned for the 2000-2010 intercensal period. Hence, the Census Bureau was interested in an assessment of the operational feasibility of conducting a large-scale ACS at the same time as a decennial census. Formally an experiment in the 2000 census program, the C2SS escalated ACS data collection to include more than one-third of all counties in the United States; this step-up in collection—while well short of full-scale implementation—offered a chance to compare ACS estimates with those from the 2000 census. Operational feasibility was defined as the C2SS tasks being executed on time and within budget, with the collected data meeting basic quality standards. No concerns about the operational feasibility of taking the ACS in 2010 were found. Griffin and Obenski (2001) wrote a summary report on the operational feasibility of the ACS, based on the C2SS.

Ethnographic Studies: Three studies were included in this experiment. One examined the representation of and responses from complex households in the decennial census through ethnographic studies of six race/ethnic groups (Schwede, 2003). A second examined shared attitudes among the generation following the “baby boomers” (i.e., those born between 1965 and 1975) about civic engagement and community involvement, government in general, and decennial census participation in particular (Crowley, 2003). A third examined factors that respondents considered when they were asked to provide information about themselves in a variety of modes (Gerber, 2003). This research suggested that the following factors may contribute to decennial census noncompliance and undercoverage errors: (1) noncitizenship or unstable immigration status, (2) respondents’ lack of knowledge about the decennial census, and (3) increased levels of distrust toward the government among respondents.

Evaluations

The Census Bureau initially planned to conduct 149 evaluation studies to assess the quality of 2000 census operations.
Due to various resource constraints, as well as the overlap of some of the studies with assessments needed to evaluate the quality of the 2000 estimates of net undercoverage, 91 studies were completed. These evaluations were summarized in a series of topic reports, the subjects of which are listed in Table 1-1.

POST HOC ASSESSMENT OF THE 2000 EXPERIMENTS AND EVALUATIONS

We have described six experiments that were embedded in the 2000 census. We can now look back at these experiments to see the extent to which they were able to influence the design of the 2010 census; in doing so, we hope to learn how to improve the selection of experiments in the 2010 census, looking toward the design of the 2020 census. Before continuing, it is important to note that the very basic design of the 2010 census was determined before these 2000 census experiments had been carried out. Therefore, at a fundamental level, the 2000 census experiments were always limited in their impact on key aspects of the basic design of the next census.

On the one hand, with the benefit of hindsight, the choice of the general subject matter for these six experiments can be viewed as relatively successful, since many of the basic issues identified for experimentation were relevant to the design of the 2010 census. The

utility of information from administrative records for census purposes, the advantages and disadvantages of Internet data collection, various aspects of census questionnaire design, and the operational feasibility of carrying out the American Community Survey during a decennial census were all issues on which additional information was needed to finalize the 2010 design. On the other hand, the details of these studies also indicate that they could have played a more integral role in the specification of the 2010 design had they been modified in relatively modest ways. For example, as a test of residence instructions, AQE2000 varied many factors simultaneously, so that individual design effects were difficult to separate out; also, its test of long-form routing instructions was largely irrelevant to a short-form-only census. AREX 2000 focused on the use of administrative records to serve in place of the current census enumeration, whereas examination of the use of administrative records to help with specific operations, such as targeted improvements in the Master Address File, assistance in late nonresponse follow-up, or assistance with coverage measurement, would have been more useful. The response mode and incentive experiment examined the use of incentives to increase use of the Internet as a response mode, but it did not examine other ways to facilitate and improve Internet usage. Finally, the Social Security Number, Privacy Attitudes, and Notification Experiment did not have any direct bearing on the 2010 design. It bears repeating that it is an enormous challenge to anticipate which issues will be prominent in the design of a census that will not be finalized until at least eight years after the census experiments themselves need to be finalized.

Since one goal of this panel study is to help the Census Bureau select useful experiments for the 2010 census, our hope is that, looking back in 2017, the 2010 census experiments will be seen as having been very useful in helping to select an effective design for the 2020 census.

With respect to the 2000 evaluations, the National Research Council report The 2000 Census: Counting Under Adversity provided an assessment of the utility of these studies, with which we agree. The study group found (National Research Council, 2004b:331-332):

Many of the completed evaluations are accounting-type documents rather than full-fledged evaluations. They provide authoritative information on such aspects as number of mail returns by day, complete-count item nonresponse and imputation rates by type of form and data collection mode, and enumerations completed in various types of special operations. … This information is valuable but limited. Many reports have no analysis as such, other than simple one-way and two-way tabulations. … Almost no reports provide tables or other analyses that look at operations and data quality for geographic areas. … 2010 planners need analysis that is explicitly designed to answer important questions for research and testing to improve the 2010 census. … Imaginative data analysis [techniques] could yield important findings as well as facilitate effective presentation of results.

OVERVIEW OF THE 2010 CENSUS

While the 2000 census was still under way, the Census Bureau began to develop a framework for the 2010 census. Originally likened to a three-legged stool, this framework was predicated on three major initiatives:

1. The traditional long-form sample—in which roughly one-sixth of census respondents would receive a detailed questionnaire covering social, economic, and demographic characteristics—would be replaced by a continuing household survey conducted throughout the decade, the American Community Survey, thus freeing the 2010 census to be a short-form-only enumeration;

2. Improvements would be made to the Census Bureau’s Master Address File (MAF) and its associated geographic database (the Topologically Integrated Geographic Encoding and Referencing System, or TIGER, database) in order to save field time and costs; and

3. A program of early, integrated planning would be implemented in order to forestall an end-of-decade crunch in finalizing a design for the 2010 census.

Reengineering the 2010 Census: Risks and Challenges reviews the early development of the 2010 census plan, noting an immediate adjunct to the basic three-legged plan: the incorporation of new technology in the census process (National Research Council, 2004a). Specifically, the 2010 census plan incorporated the view that handheld computers could be used in several census operations in order to reduce field data collection costs and improve data quality. Following a series of decisions not to adjust the counts from the 2000 census for estimated coverage errors, the Census Bureau also established the basic precept that the 2010 census coverage measurement program would be used primarily to support a feedback loop of census improvement rather than for census adjustment.

As the 2010 census plan has developed, major differences between the 2010 plan and its 2000 predecessor—in addition to the broad changes already described—include:

• The use of handheld computers by field enumerators has been focused on three major operations: updating the Master Address File during the address canvassing operation, conducting nonresponse follow-up interviewing, and implementing a new coverage follow-up (CFU) operation.

• The coverage follow-up interview is a consolidation and substantial expansion of a telephone follow-up operation used in the 2000 census, which focused on following up households with count discrepancies and households with more than the maximum of six residents allowed on the census form. While detailed plans for this follow-up operation are as yet incomplete, it appears that the CFU in 2010 will also follow up households with evidence of duplicate enumerations, with people counted as residents who possibly should have been enumerated elsewhere, and with people treated as nonresidents who may have been incorrectly omitted from the count of that household.

• The Local Update of Census Addresses (LUCA) program, which gives local and tribal governments an opportunity to review and suggest additions to the Master Address File for their areas, has been revised to facilitate participation by local governments and to enhance communication between the Census Bureau and local officials.

• Nonrespondents to the initial questionnaire mailing will be sent a replacement questionnaire to improve mail response.

• Households in selected geographic areas will be mailed a bilingual census questionnaire in Spanish and English.

• The census questionnaire will include two “coverage probe” questions to encourage correct responses (and to serve as a trigger for inclusion in the CFU operation).

• The definitions of group quarters—nonhousehold settings like college dormitories, nursing homes, military barracks, and correctional facilities—have been revised.

• Continuing a trend from 2000, the Census Bureau will rely increasingly on outside contractors to carry out several census processes.

THE CPEX PLANNING DOCUMENT

This, the panel’s first interim report, reviews the current status of the Census Bureau’s experimentation and evaluation plans heading into the 2010 census. As the major input to the panel’s first meeting and our work to date, the Census Bureau provided a list of 52 issues, reprinted as Appendix A, corresponding to component processes of the 2010 census design that were viewed either as potentially capable of improvement or as of sufficient concern to warrant a careful assessment of their performance in 2010. The list, divided into the following 11 categories, was provided to us as the set of issues that the Census Bureau judged as possibly benefiting from either experimentation in 2010 or evaluation after the 2010 census has concluded:

1. content,
2. race and Hispanic origin,
3. privacy,
4. language,
5. self-response options,
6. mode effects,
7. special places and group quarters,
8. marketing/publicity,
9. field activities,
10. coverage improvement, and
11. coverage measurement.

In addition to the descriptions of the topics themselves, the Census Bureau also provided indications as to whether each topic has a high priority, whether it could potentially save substantial funds in the 2020 census, whether results could conclusively measure the

effects on census data quality, whether the issue addresses operations that are new since the 2000 census, and whether data will be available to answer the questions posed. This list of topics was a useful start to the panel’s work but, as discussed further below, it is deficient in some respects: it is not separated into potential experiments and potential evaluations, and it does not contain quantitative information on cost or quality implications. Such a list of topics also needs to be considered further in the context of a general scheme for the 2020 census.

GUIDE TO THE REPORT

The remainder of this report is structured as follows. Chapter 2 provides initial views on the 2010 census experiments; a first section on a general approach to the selection of census experiments is followed by the panel’s recommended priorities for topics for experimentation in 2010. Chapter 3 begins with suggestions for the 2010 census evaluations, continues with a general approach to census evaluation, and concludes with considerations regarding a general approach to census research. Chapter 4 presents additional considerations for the 2010 census itself: it begins with technology concerns for 2010, followed by a discussion of data retention by census contractors, and concludes with a discussion of the benefits of facilitating census enumeration as part of telephone questionnaire assistance. Appendix A provides the Census Bureau’s summaries of suggested research topics for experiments and evaluations in 2010. Appendix B summarizes Internet response options in the 2000 U.S. census and in selected population censuses in other countries. Appendix C presents biographical sketches of panel members and staff.

TABLE 1-1 Topic Headings, 2010 CPEX Research Proposals and 2000 Census Evaluation Program

2010 CPEX Proposals                              2000 Census Evaluation Topic Reports
Content                                          Content and data quality
Coverage improvement                             Coverage improvement
   Address list development                      Address list development
   AREX 2000 experiment^a                        Administrative records
   Coverage follow-up                            Partial: Coverage improvement
   Residency rules/question development          AQE2000 experiment
   Be Counted                                    Partial: Response rates and behavior analysis
   General                                       —
Coverage measurement                             Coverage measurement
Field activities
   Automation                                    Partial: Automation of census 2000 processes
   Training                                      —
   Quality control                               Partial: Content and data quality
Language                                         Partial: Response rates and behavior analysis
Marketing/publicity/advertising/partnerships     Partnership and marketing program
Mode effects                                     —
Privacy                                          Privacy research in census 2000^b
Race and Hispanic origin                         Race and ethnicity
Self-response options                            —
Special places and group quarters                Special places and group quarters
—                                                Automation of census 2000 processes
—                                                Data capture
—                                                Data collection
—                                                Data processing
—                                                Ethnographic studies^c
—                                                Puerto Rico
—                                                Response rates and behavior analysis

Note: In the original table, italicized entries indicate deviations from the column heading “2000 Census Evaluation Topic Reports”: some entries were not topic reports but were experiments, and some of the operations were part of the 2000 Coverage Improvement report.
^a Described as a partial match because the CPEX proposals under automation are oriented principally at one component (handheld computers).
^b Privacy was also touched on by the Social Security Number, Privacy Attitudes, and Notification (SPAN) experiment.
^c The 2000 census included several ethnographic studies; administratively, about half were considered part of the experiments, while the others were formally designated as evaluations (and were the subject of a topic report).