–1–
Introduction

The Census of Population and Housing is carried out in the United States every 10 years, and the next census is scheduled to begin its mailout-mailback operations in March 2010. For at least the past 50 years, each decennial census has been accompanied by a research program of evaluation or experimentation. The Census Bureau typically refers to a census “experiment” as a study involving field data collection—typically carried out simultaneously with the decennial census itself—in which alternatives to census processes currently in use are assessed for a subset of the population. By comparison, census “evaluations” are usually post hoc analyses of data collected as part of the decennial census process to determine whether individual steps in the census operated as expected. Collectively, census experiments and evaluations are designed to inform the Census Bureau as to the quality of the processes and results of the census, as well as to help plan for modifications and innovations that will improve the (cost) efficiency and accuracy of the next census. The Census Bureau is currently developing a set of evaluations and experiments to accompany the 2010 census, which the Bureau refers to as the 2010 Census Program for Evaluations and Experiments or CPEX.

These two activities of the more general census research program are concentrated during the conduct of the census itself, but census-related research activities continue throughout the decade. Traditionally, the Census Bureau’s intercensal research has been focused on a series of census tests, some of which are better described as “test censuses” because they are conducted in specific geographic areas and can include fieldwork (e.g., in-person follow-up for nonresponse) as well as contact through the mail or other means.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





The sequence of tests usually culminates in a dress rehearsal two years prior to the decennial census. In addition to the test censuses, the Census Bureau has also conducted some smaller scale experimental data collections during the intercensal period.

1–A CHARGE TO THE PANEL

As it began to design its CPEX program for 2010, the Census Bureau requested that the Committee on National Statistics of the National Academies convene the Panel on the Design of the 2010 Census Program of Evaluations and Experiments. The panel’s charge is to:

. . . consider priorities for evaluation and experimentation in the 2010 census. [The panel] will also consider the design and documentation of the Master Address File and operational databases to facilitate research and evaluation, the design of experiments to embed in the 2010 census, the design of evaluations of the 2010 census processes, and what can be learned from the pre-2010 testing that was conducted in 2003–2006 to enhance the testing to be conducted in 2012–2016 to support census planning for 2020. Topic areas for research, evaluation, and testing that would come within the panel’s scope include questionnaire design, address updating, nonresponse follow-up, coverage follow-up, unduplication of housing units and residents, editing and imputation procedures, and other census operations. Evaluations of data quality would also be within scope. . . .

More succinctly, the Census Bureau requests that the panel:

• Review proposed topics for evaluations and experiments;
• Assess the completeness and relevance of the proposed topics for evaluation and experimentation;
• Suggest additional research topics and questions;
• Recommend priorities;
• Review and comment on methods for conducting evaluations and experiments; and
• Consider what can be learned from the 2010 testing cycle to better plan research for 2020.
The panel is charged with evaluating the 2010 census research program, primarily in setting the stage for the 2020 census. As the first task, the panel was asked to review an initial list of research topics compiled by the Census Bureau, with an eye toward identifying priorities for specific experiments and evaluations in 2010. This first interim report by the panel uses the Bureau’s initial suggestions for consideration as a basis for commentary on the overall shape of the research program surrounding the 2010 census and leading up to the 2020 census. It is specifically the goal of this report to suggest priorities for the experiments to be conducted in line with the 2010 census because they are the most time-sensitive. To some observers, a two-year time span between now and the fielding of the 2010 census may seem like a long time; in the context of planning an effort as complex as the decennial census, however, it is actually quite fleeting. Experimental treatments must be specified, questionnaires must be tested and approved, and systems must be developed and integrated with standard census processes—all at the same time that the Bureau is engaged in an extensive dress rehearsal and final preparations for what has long been the federal government’s largest and most complex non-military operation. Accordingly, the Census Bureau plans to identify topics for census experiments to be finalized by winter 2007 and to have more detailed plans in place in summer 2008; this report is an early step in that effort.

Although this report is primarily about priorities for experiments, we also discuss the evaluation component of the CPEX. This is because even the basic possibilities for specific evaluations depend critically on the data that are collected during the conduct of the census itself. Hence, we offer comments about the need to finalize plans for 2010 data collection—whether in house by the Census Bureau or through its technical contractors—in order to facilitate a rich and useful evaluation program. We will continue to study the CPEX program over the next few years, and we expect to issue at least one more report; these subsequent reports will respond to the Bureau’s evolving development of the CPEX plan as well as provide more detailed guidance for the conduct of specific evaluations and experiments.

1–B BACKGROUND: EXPERIMENTS AND EVALUATIONS IN THE 2000 CENSUS

As context for the discussion that follows and to get a sense of the scope of CPEX, it is useful to briefly review the experiments and evaluations of the previous census.
The results of the full Census 2000 Testing, Experimentation, and Evaluation Program are summarized by Abramson (2004).

1–B.1 Experiments

The Census Bureau carried out five experiments in conjunction with the 2000 census. Several ethnographic studies were also conducted during the 2000 census; about half of these were considered to be part of the formal evaluation program, whereas the others were designated as a sixth experiment.

• Census 2000 Alternative Questionnaire Experiment (AQE2000): AQE2000 comprised three experiments for households in the mailout-mailback universe of the 2000 census. The skip instruction experiment examined the effectiveness of different methods for guiding respondents through an alternative long-form questionnaire with skip patterns. The residence instructions experiment tested various methods (format, presentation, and wording of instructions) for representing the decennial census residence rules on the questionnaire. The hope was to improve within-household coverage by modifying the roster instructions. Finally, the race and Hispanic origin experiment compared the 1990 race and Hispanic origin questions with the questions on the Census 2000 short form, specifically assessing the effect of permitting the reporting of more than one race and reversing the sequence of the race and Hispanic origin items. This experiment is summarized by Martin et al. (2003).

• Administrative Records Census 2000 Experiment (AREX 2000): AREX 2000 was designed to assess the value of administrative records data in conducting an administrative records census. As a by-product, it also provided useful information as to the value of administrative records in carrying out or assisting in various applications in support of conventional decennial census processes. AREX 2000 used administrative records to provide information on household counts, date of birth, race, Hispanic origin, and sex, linked to a corresponding block code. The test was carried out in five counties in two sites, Baltimore City and Baltimore County, Maryland, and Douglas, El Paso, and Jefferson counties in Colorado, with approximately 1 million housing units and a population of approximately 2 million. The population coverage for the more thorough of the schemes tested was between 96 and 102 percent relative to the Census 2000 counts for the five test site counties. However, the AREX 2000 and the census counted the same number of people in a housing unit only 51.1 percent of the time.
They differed by at most one person only 79.4 percent of the time. The differences between the administrative records–based counts and the census counts were primarily attributed to errors in address linkage and typical deficiencies in administrative records (missed children, lack of representation of special populations, and deficiencies resulting from the time gap between the administrative records extracts and Census Day). Another important finding was that administrative records are not currently a good source of data for race and Hispanic origin, and the models used to impute race and Hispanic origin were not sufficient to correct the deficiencies in the data. The experiment is summarized by Bye and Judson (2004).

• Social Security Number, Privacy Attitudes, and Notification Experiment (SPAN): This experiment assessed the public’s attitudes regarding the census and its uses, trust and privacy issues, the Census Bureau’s confidentiality practices, possible data sharing across federal agencies, and the willingness of individuals to provide their Social Security number on the decennial census questionnaire. In addition, the public’s attitude toward the use of administrative records in taking the census was also assessed. The experiment is described in detail by Larwood and Trentham (2004).

• Response Mode and Incentive Experiment (RMIE): The RMIE investigated the impact of three computer-assisted data collection techniques—computer-assisted telephone interviewing (CATI), the Internet, and interactive voice response—on the response rate and quality of the data collected. The households in six panels were given the choice of providing their data via the usual paper forms or by one of these alternate modes. Half of the panels were offered an incentive—a telephone calling card good for 30 minutes of calls—for using the alternate response mode. In addition, the experiment included a nonresponse component designed to assess the effects on response of an incentive to use alternative response mode options among a sample of census households that failed to return their census forms by April 26, 2000. This was to test the effect of these factors on a group representing those who would be difficult to enumerate. A final component of this experiment involved interviewing households assigned to the Internet mode who opted to complete the traditional paper census form to determine why these households did not use the Internet. One of the findings was that the Internet provided relatively high-quality data. However, among respondents who were aware of the Internet option, 35 percent reported that they believed the paper census form would be easier to complete.
Other reasons for not using the Internet include no access to a computer, concerns about privacy, “forgot the Internet was an option,” and insufficient knowledge of the Internet. The incentive did not increase response but instead redirected response to the alternate modes. The CATI option seemed to be preferred over the other two alternate modes. Caspar (2004) summarizes the experiment’s results.

• Census 2000 Supplementary Survey (C2SS): By 1999, the basic notion that the new American Community Survey (ACS) would take the role of the traditional census long-form sample had been established (this is discussed in more detail in the next section). ACS testing had grown to include fielding in about 30 test sites (counties), with full-scale implementation planned for the 2000–2010 intercensal period. Hence, the Census Bureau was interested in some assessment of the operational feasibility of conducting a large-scale ACS at the same time as a decennial census. Formally an experiment in the 2000 census program, the C2SS escalated ACS data collection to include more than one-third of all counties in the United States; this step-up in collection—while well short of full-scale implementation—offered a chance to compare ACS estimates with those from the 2000 census. Operational feasibility was defined as the C2SS tasks being executed on time and within budget with the collected data meeting basic quality standards. No concerns about the operational feasibility of taking the ACS in 2010 were found. Griffin and Obenski (2001) wrote a summary report on the operational feasibility of the ACS, based on the C2SS.

• Ethnographic Studies: Three studies were included in this experiment. One study examined the representation of and responses from complex households in the decennial census through ethnographic studies of six race/ethnic groups (Schwede, 2003). A second study examined shared attitudes among those individuals following the “baby boomers,” i.e., those born between 1965 and 1975, about civic engagement and community involvement, government in general, and decennial census participation in particular (Crowley, 2003). A third study examined factors that respondents considered when they were asked to provide information about themselves in a variety of modes (Gerber, 2003). This research suggested that the following factors may contribute to decennial noncompliance and undercoverage errors: (1) noncitizenship status or unstable immigration status, (2) respondents not knowing about the decennial census, and (3) increased levels of distrust among respondents toward the government.

1–B.2 Evaluations

The Census Bureau initially planned to conduct 149 evaluation studies to assess the quality of 2000 census operations.
Due to various resource constraints, as well as the overlap of some of the studies with assessments needed to evaluate the quality of the 2000 estimates of net undercoverage, 91 studies were completed. These evaluations were summarized in various topic reports, the subjects of which are listed in Table 1-1.

Table 1-1 Topic Headings, 2010 CPEX Research Proposals and 2000 Census Evaluation Program

2010 CPEX Proposals                              2000 Census Evaluation Topic Reports
Content                                          Content and data quality
Coverage improvement                             Coverage improvement
  Address list development                       Address list development
  Administrative records                         AREX2000 experiment
  Coverage follow-up                             Partial: Coverage improvement
  Residency rules/question development           AQE2000 experiment
  Be Counted                                     Partial: Response rates and behavior analysis
  General                                        —
Coverage measurement                             Coverage measurement
Field activities
  Automation                                     Partial: Automation of census 2000 processes^a
  Training                                       —
  Quality control                                Partial: Content and data quality
Language                                         Partial: Response rates and behavior analysis
Marketing/publicity/advertising/partnerships     Partnership and marketing program
Mode effects                                     —
Privacy                                          Privacy research in census 2000^b
Race and Hispanic origin                         Race and ethnicity
Self-response options                            —
Special places and group quarters                Special places and group quarters
—                                                Automation of census 2000 processes
—                                                Data capture
—                                                Data collection
—                                                Data processing
—                                                Ethnographic studies^c
—                                                Puerto Rico
—                                                Response rates and behavior analysis

NOTE: The italics in the entries indicate deviations from the column heading, “2000 Census Evaluation Topic Reports.” Some of the entries were not topic reports but were experiments. Also, some of the operations were part of the 2000 Coverage Improvement report.
^a Described as partial match because the CPEX proposals under automation are oriented principally at one component (handheld computers).
^b Privacy was also touched on by the Social Security Number, Privacy Attitudes, and Notification (SPAN) experiment.
^c The 2000 census included several ethnographic studies; administratively, about half were considered part of the experiments while others were formally designated as evaluations (and were the subject of a topic report).

1–C POST HOC ASSESSMENT OF THE 2000 EXPERIMENTS AND EVALUATIONS

We have described six experiments that were embedded in the 2000 census. We can now look back at these experiments to see the extent to which they were able to play a role in impacting the design of the 2010 census. In doing that we hope to learn how to improve the selection of experiments in the 2010 census, looking toward the design of the 2020 census. Before continuing, it is important to note that the very basic design of the 2010 census was determined before these 2000 census experiments had been carried out. Therefore, at a fundamental level, the 2000 census experiments were always limited in their impact on key aspects of the basic design of the next census.

On the one hand, with the benefit of hindsight, the choice of the general subject matter for these six experiments can be viewed as relatively successful, since many of the basic issues identified for experimentation were relevant to the design of the 2010 census. The utility of information from administrative records for census purposes, the advantages and disadvantages of Internet data collection, various aspects of census questionnaire design, and the operational feasibility of the American Community Survey being carried out during a decennial census were issues for which additional information was needed to finalize the 2010 design.

On the other hand, the details of these studies also indicate that they could have played a more integral role in the specification of the design for the 2010 census if they had been modified in relatively modest ways. For example, as a test of residence instructions, AQE2000 varied many factors simultaneously so that individual design effects were difficult to separate out. Also, the test of long-form routing instructions was largely irrelevant to a short-form-only census. AREX 2000 focused on the use of administrative records to serve in place of the current census enumeration, whereas examination of the use of administrative records to help with specific operations, such as for targeted improvements in the Master Address File, to assist in late nonresponse follow-up, or to assist with coverage measurement, would have been more useful.
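The household-level comparison behind the AREX 2000 figures cited earlier (51.1 percent of linked housing units with identical person counts, 79.4 percent differing by at most one person) is at heart a simple linkage-and-agreement computation. The following is a minimal sketch of that kind of metric; the housing-unit IDs and counts here are hypothetical, not AREX data:

```python
def count_agreement(admin_counts, census_counts):
    """Compare person counts for housing units present in both sources.

    admin_counts, census_counts: dicts mapping a housing-unit ID to the
    number of residents recorded by administrative records and by the
    census, respectively. Only units linked in both sources are compared.
    Returns (share with identical counts, share differing by at most one).
    """
    linked = admin_counts.keys() & census_counts.keys()
    if not linked:
        return 0.0, 0.0
    exact = sum(1 for u in linked if admin_counts[u] == census_counts[u])
    within_one = sum(1 for u in linked
                     if abs(admin_counts[u] - census_counts[u]) <= 1)
    return exact / len(linked), within_one / len(linked)

# Hypothetical example: four housing units linked in both sources.
admin = {"HU1": 3, "HU2": 2, "HU3": 5, "HU4": 1}
census = {"HU1": 3, "HU2": 3, "HU3": 2, "HU4": 1}
exact_share, within_one_share = count_agreement(admin, census)
print(exact_share)       # 0.5  (HU1 and HU4 match exactly)
print(within_one_share)  # 0.75 (HU2 is also within one person)
```

Note that the metric is conditional on successful address linkage; units that fail to link (a major error source cited for AREX 2000) never enter the denominator.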
The response mode and incentive experiment examined the use of incentives to increase use of the Internet as a mode of response, but it did not examine other ways to potentially facilitate and improve Internet usage. Finally, the Social Security Number, Privacy, and Notification Experiment did not have any direct bearing on the 2010 design.

It bears repeating that it is an enormous challenge to anticipate what issues will be prominent in census designs for a census that will not be finalized for at least eight years after the census experiments themselves need to be finalized. Since one goal of this panel study is to help the Census Bureau select useful experiments for the 2010 census, our hope is that, when looking back in 2017, the 2010 census experiments will be seen as very useful in helping to select an effective design for the 2020 census.

With respect to the 2000 evaluations, the National Research Council report The 2000 Census: Counting Under Adversity provided an assessment of the utility of these studies, with which we are in agreement. The study group found (National Research Council, 2004a:331–332):

Many of the completed evaluations are accounting-type documents rather than full-fledged evaluations. They provide authoritative information on such aspects as number of mail returns by day, complete-count item nonresponse and imputation rates by type of form and data collection mode, and enumerations completed in various types of special operations. . . . This information is valuable but limited. Many reports have no analysis as such, other than simple one-way and two-way tabulations. . . . Almost no reports provide tables or other analyses that look at operations and data quality for geographic areas. . . . 2010 planners need analysis that is explicitly designed to answer important questions for research and testing to improve the 2010 census. . . . Imaginative data analysis [techniques] could yield important findings as well as facilitate effective presentation of results.

1–D OVERVIEW OF THE 2010 CENSUS

While the 2000 census was still under way, the Census Bureau began to develop a framework for the 2010 census. Originally likened to a three-legged stool, this framework was predicated on three major initiatives:

• The traditional long-form sample—in which roughly one-sixth of census respondents would receive a detailed questionnaire covering social, economic, and demographic characteristics—would be replaced by a continuing household survey throughout the decade, the American Community Survey, thus freeing the 2010 census to be a short-form-only enumeration;

• Improvements would be made to the Census Bureau’s Master Address File (MAF) and its associated geographic database (the Topologically Integrated Geographic Encoding and Referencing System, or TIGER, database) in order to save field time and costs; and

• A program of early, integrated planning would be implemented in order to forestall an end-of-decade crunch in finalizing a design for the 2010 census.
Reengineering the 2010 Census: Risks and Challenges reviews the early development of the 2010 census plan, noting an immediate adjunct to the basic three-legged plan: the incorporation of new technology in the census process (National Research Council, 2004b). Specifically, the 2010 census plan incorporated the view that handheld computers could be used in several census operations in order to reduce field data collection costs and improve data quality. Following a series of decisions not to adjust the counts from the 2000 census for estimated coverage errors, the Census Bureau also established the basic precept that the 2010 census coverage measurement program would be used primarily to support a feedback loop of census improvement rather than for census adjustment.

As the 2010 census plan has developed, major differences between the 2010 plan and its 2000 predecessor—in addition to the broad changes already described—include:

• The use of handheld computers by field enumerators has been focused on three major operations: updating the Master Address File during the address canvassing procedure, conducting nonresponse follow-up interviewing, and implementing a new coverage follow-up (CFU) operation.

• The coverage follow-up interview is a consolidation and substantial expansion of a telephone follow-up operation used in the 2000 census, which was focused on following up households with count discrepancies and households with more than the six maximum residents allowed on the census form. While detailed plans for this follow-up operation are as yet incomplete, it appears that the CFU in 2010 will also follow up households with evidence of having duplicate enumerations, with people viewed as residents who possibly should have been enumerated elsewhere, and with people viewed as nonresidents who may have been incorrectly omitted from the count of that household.

• The Local Update of Census Addresses (LUCA) program, which gives local and tribal governments an opportunity to review and suggest additions to the Master Address File from their areas, has been revised to facilitate participation by local governments and to enhance communication between the Census Bureau and local officials.

• Nonrespondents to the initial questionnaire mailing will be sent a replacement questionnaire to improve mail response.

• Households in selected geographic areas will be mailed a bilingual census questionnaire in Spanish and English.

• The census questionnaire will include two “coverage probe” questions to encourage correct responses (and to serve as a trigger for inclusion in the CFU operation).

• The definitions of group quarters—nonhousehold settings like college dormitories, nursing homes, military barracks, and correctional facilities—have been revised.

• Continuing a trend from 2000, the Census Bureau will increasingly rely on outside contractors to carry out several of the processes.

1–E THE CPEX PLANNING DOCUMENT

This, the panel’s first interim report, provides a review of the current status of the experimentation and evaluation plans of the Census Bureau heading into the 2010 census. As the major input to the panel’s first meeting and our work to date, the Census Bureau provided a list of 52 issues, reprinted as Appendix A, corresponding to component processes of the 2010 census design that were viewed either as potentially capable of improvement or of sufficient concern to warrant a careful assessment of their performance in 2010. The list, divided into the following 11 categories, was provided to us as the set of issues that the Census Bureau judged as possibly benefiting from either experimentation in 2010 or evaluation after the 2010 census has concluded:

• content,
• race and Hispanic origin,
• privacy,
• language,
• self-response options,
• mode effects,
• special places and group quarters,
• marketing/publicity,
• field activities,
• coverage improvement, and
• coverage measurement.

In addition to the description of the topics themselves, the Census Bureau also provided indications as to whether these topics have a high priority, whether they could potentially save substantial funds in the 2020 census, whether results could conclusively measure the effects on census data quality, whether the issue addresses operations that are new since the 2000 census, and whether data will be available to answer the questions posed. This list of topics was a useful start to the panel’s work, but, as discussed further below, it is deficient in some ways, especially since it is not separated into potential experiments or evaluations and does not contain quantitative information on cost or quality implications. Also, such a list of topics needs to be further considered in the context of a general scheme for the 2020 census.

1–F GUIDE TO THE REPORT

The remainder of this report is structured as follows. Chapter 2 provides initial views on the 2010 census experiments. There is a first section on a general approach to the selection of census experiments, which is followed by the panel’s recommended priorities for topics for experimentation in 2010.
Chapter 3 begins with suggestions for the 2010 census evaluations, follows with a general approach to census evaluation, and concludes with considerations regarding a general approach to census research. Chapter 4 presents additional considerations for the 2010 census itself. It begins with technology concerns for 2010, followed by a discussion of the issue of data retention by census contractors. The chapter concludes with a discussion of the benefits of facilitating census enumeration as part of telephone questionnaire assistance. Appendix A provides the Census Bureau’s summaries of suggested research topics for experiments and evaluations in 2010. Appendix B summarizes Internet response options in the 2000 U.S. census and in selected population censuses in other countries. Appendix C presents biographical sketches of panel members and staff.