CHAPTER 7
Coverage Measurement

IT IS ALMOST CERTAIN that the resident population of the United States on April 1, 2000, was not exactly equal to 281,421,906, even though that is the total reported by the 2000 census. No decennial census has ever attained a perfect, complete count of the population; the results of a census represent the best effort to count every resident once and only once, but some people are inevitably missed in the count and others are counted multiple times. The possibility of undercount in the census has been a longstanding concern, particularly since the level of undercount has been estimated to vary differentially by racial and ethnic groups in recent censuses. In the 2000 census, follow-up research eventually concluded that the 2000 census may have experienced a net overcount, the first such occurrence in census history. Given the inherent complexity of the decennial census task, it is crucial that the census include programs that permit examination of the accuracy and completeness of the count; development of such a coverage measurement plan remains a major challenge in planning the 2010 census.

In this chapter, we outline our suggestions for the shape of a coverage program in 2010 relative to that used in 2000 (Section 7-A). We then comment on demographic analysis, an alternative coverage measurement methodology that provided a very useful point of comparison in the 2000 census coverage measurement program (Section 7-B). Finally, we discuss the use of administrative records, the focus of a major experiment in 2000 (Section 7-C).

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

7–A THE SHAPE OF COVERAGE MEASUREMENT IN 2010

The quality of census coverage and the possibility of statistically adjusting census totals to reflect coverage gaps developed into the defining issues of the 1990 and, especially, the 2000 censuses. That some people are missed in the census count while others may be counted multiple times is virtually inevitable and has never been in dispute, even in the earliest censuses. However, the intensity of the political debate over census coverage, over the differential nature of census undercount by race and other demographic groups, and over the reliability and validity of statistical adjustment grew enormously in the past two censuses, to the point that the 2000 census was conducted under an unprecedented level of oversight and suspicion. The results of the 2000 coverage evaluation efforts have not settled the ongoing debate over census adjustment.

In the 2000 census cycle, the Census Bureau faced three separate points at which a decision on statistical adjustment had to be rendered: March 2001 for redistricting purposes, October 2001 for federal fund allocation and other purposes, and March 2003 for use as the base for postcensal population estimates. In all three instances, the Bureau opted against adjustment because results of the Accuracy and Coverage Evaluation (ACE) Program—the follow-up survey used to assess census coverage and derive adjustment factors—showed unanticipated results.
In March 2001, concern over the discrepancy between ACE-adjusted census counts and the alternative population count derived through demographic analysis was sufficiently large to deter adjustment; ACE research through October 2001 resolved some conceptual issues and led to a significantly lower estimate of national net undercount in the census, but still left too many unanswered questions for the Bureau to recommend adjustment. By March 2003, the Bureau's reexamination of the ACE (ACE Revision II, in the Bureau's terminology) suggested a national net overcount of population, the first such finding in census history (although different racial and demographic groups still experienced significant net undercount at the national level).
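The dual-systems logic that underlies the ACE can be sketched with the classical Lincoln-Petersen (capture-recapture) estimator. The counts below are invented for illustration; an operational dual-systems estimate also involves poststratification, missing-data adjustments, and sampling weights that this sketch omits.

```python
def dual_systems_estimate(census_count: int, pes_count: int, matched: int) -> float:
    """Lincoln-Petersen (capture-recapture) estimate of total population.

    Relies on the independence assumption: inclusion in the census and
    inclusion in the postenumeration survey (PES) are treated as
    independent events with homogeneous capture probabilities.
    """
    if matched == 0:
        raise ValueError("no matched persons; the estimate is undefined")
    return census_count * pes_count / matched

# Invented block-level counts: 950 census enumerations, 900 PES
# enumerations, 870 persons matched to both lists.
estimate = dual_systems_estimate(950, 900, 870)    # about 982.8
net_undercount_rate = (estimate - 950) / estimate  # about 3.3 percent
```

The estimator's sensitivity to the independence assumption is precisely why correlation bias and matching error dominated the debates over adjustment described above.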

As Singh and Bell (2003) noted at the panel's September 2003 meeting, the Census Bureau's plans for coverage measurement in 2010 will be driven by the following general goals: (1) to produce measures of the components of coverage error; (2) to produce these measures for demographic groups, geographic areas, and key census operations; and (3) to provide measures of net coverage error. However, to our knowledge, the Bureau has not yet made concrete plans for testing improved coverage measurement procedures.

Our approach in this report is primarily pragmatic. We believe it is vitally important that the 2010 census include mechanisms that permit in-depth evaluation of how well the census performs in enumerating the population, both as a whole and differentially for population subgroups. Most simply, a program for the measurement of coverage is necessary, although it need not be a census "coverage measurement" program as that term has come to be known. However the 2010 coverage measurement program is structured, it is essential that it be addressed early, that it be the subject of research and evaluation throughout the years leading up to 2010, and that it be included in the 2006 proof-of-concept test and the 2008 census dress rehearsal.

It is not necessary that the coverage measurement program for the 2010 census follow the same structure and script as the 2000 Accuracy and Coverage Evaluation Program. Indeed, in light of the analysis of the Panel to Review the 2000 Census (National Research Council, 2004), repetition of the 2000 ACE in 2010 without substantial improvement would be detrimental; it would likewise be harmful if the 2000 methodology had to be used, as is, as a fall-back position absent research and resolution of a plan in the years preceding 2010.
To the extent that coverage measurement in 2010 makes use of a postenumeration survey (PES) combined with dual-systems estimation (DSE)—the primary approach used in the past two censuses—we have made several suggestions in this report that could improve the methodology. These include:

- further research on matching census records nationwide by name and date of birth as part of the unduplication effort (Section 5-E);
- inclusion of the group quarters population in the postenumeration survey (and, more generally, reconciliation of group quarters enumeration with housing unit enumeration) (Section 5-B.2); and
- better definition of census residence rules and better communication of those rules to respondents through redesigned questionnaires and CAPI techniques (Section 5-B.3).

In addition, the panel hopes that an improved MAF/TIGER will contribute to a reduction in geocoding errors, which proved to be a point of concern in the 2000 ACE. But, beyond those steps, we do not believe it appropriate to delve into the mechanics of the PES-DSE combination in this report nor to offer specific recommendations on how it should or should not be implemented in 2010, given that our active discussion with the Bureau on those possibilities began very late in the panel's term.

In the balance of this chapter, we discuss demographic analysis in Section 7-B. Used as a coverage measurement tool since the 1950 census, demographic analysis remains an important approach that could benefit from some strategies for improvement. Finally, while research on administrative records remains, to a great extent, a topic for experimentation rather than implementation in 2010, such research could feed into coverage evaluation efforts (Section 7-C).

7–B ENHANCING DEMOGRAPHIC ANALYSIS FOR 2010

The success of demographic analysis as a tool for census coverage evaluation depends on access to accurate and highly reliable information. It has been generally assumed that the data used for demographic analysis were of sufficient quality to support highly accurate intercensal annual estimates of the size of the United States population, and that these estimates could be accumulated over the decade to obtain a figure against which the next census enumeration could be benchmarked. Demographic analysis has also been an important tool for assessing the black-white differential undercount.
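The accumulation of intercensal estimates described above rests on the demographic balancing equation, P(t+1) = P(t) + births - deaths + net migration. A minimal sketch, with component values (in millions) invented purely for illustration:

```python
def carry_forward(base_population: float,
                  components: list[tuple[float, float, float]]) -> float:
    """Apply the demographic balancing equation year by year:
    P(t+1) = P(t) + births - deaths + net_migration."""
    population = base_population
    for births, deaths, net_migration in components:
        population += births - deaths + net_migration
    return population

# Invented figures, in millions: a base population carried forward two
# years to produce a benchmark against which a census could be compared.
benchmark = carry_forward(248.7, [(4.0, 2.3, 1.0), (4.1, 2.3, 1.1)])  # 254.3
```

The method's accuracy is therefore only as good as the component series feeding it, which is why the quality of the fertility, mortality, and migration data is examined below.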
Demographic analysis requires highly accurate data on three components of population change: fertility, mortality, and net migration.

In the case of fertility data, the surveys done in the 1960s to determine the completeness of the data since 1930 have generally confirmed the accuracy of these data, both in terms of the numbers of births reported and the characteristics of mothers and offspring. However, alternative assumptions about the completeness of the data were used in the application of demographic analysis to the coverage of the 2000 census (Robinson, 2001).

For mortality data, while estimates of the numbers of deaths reported can be assumed to be relatively accurate, studies of infant mortality (Hahn, 1992; Hahn et al., 1992) have raised serious questions about the accuracy of reports of the racial background of the deceased. Coroners and medical examiners, for example, are not always able to reliably determine the race of the deceased in the absence of information from the decedent's family.

Immigration and emigration data are undoubtedly the most problematic component for demographic analysis. Immigration estimates consist of two subcomponents: documented and undocumented immigrants. For demographic analysis, the Immigration and Naturalization Service (INS)1 has been an important source of information about United States immigration for documented and undocumented arrivals. While estimates of immigrants are relatively complete when people arrive with the proper documentation, estimates of undocumented immigration are no more than educated guesses based on INS arrests and deportations and on imaginative use of census and household survey data. For the 2002 intercensal estimates, the Census Bureau was able, for the first time, to incorporate data from the 2000 and 2001 Census Supplementary Surveys. These data were corroborated with data from the INS and were found to be reasonably consistent for documented immigrants.
However, as in the past, undocumented immigration continues to be treated as a residual category resting on the unsubstantiated assumption that the coverage of undocumented immigrants is not significantly different from that of immigrants who arrive with proper documentation.

In the past, a great deal of research by federal agencies and others has supported the view that most undocumented entrants arrive from Mexico and Central and South America. While the majority of undocumented entrants may indeed arrive from these regions, an increasing number may be arriving from other troubled parts of the world, such as Asia and Africa. Because of the relatively porous border with Canada, undocumented immigrants from Asia, Africa, and even Europe may well prefer entering the United States from the north. Another option is to enter the country with a tourist visa and then simply remain beyond the visa's expiration. Given the United States' increasingly global ties and the comparatively liberal rules for entry into Canada, it may no longer be reasonable to assume that undocumented immigration from the north is negligible. Research on the extent of undocumented immigration from Canada is very limited, and additional research could be warranted. The transfer of portions of the INS to the new Department of Homeland Security is likely to result in these issues being given a higher priority than in the past. However, it is not clear how well the size of this population can be estimated even with maximal resources.

Demographic analysis is also important because it provides reasonably unbiased national estimates of the number of native-born black Americans and because it supports the production of intercensal estimates for subnational areas, though estimates of interstate migration are also needed. ACS data might support substantial enhancement of the current approach to estimating internal migration, which uses tax return data. However, as discussed in Chapter 4, much research is needed to determine how to exploit the ACS data for this purpose.

1   In March 2003, authority that had been vested in the INS was divided among three bureaus in the newly formed Department of Homeland Security: the U.S. Citizenship and Immigration Services, the Bureau of Immigration and Customs Enforcement, and the Bureau of Customs and Border Protection.
Like National Research Council (2004), we believe that demographic analysis has proved to be a useful independent benchmark—but is not in itself a gold standard—for assessment of census coverage. Particularly if estimates of immigration and emigration can be improved, we believe that it should continue to play a valuable role in coverage measurement in 2010. In addition, we noted earlier that the fundamental research on the completeness of birth registration needs to be updated and that questions have been raised regarding racial reporting in death data. Accordingly, the demographic analysis program for 2010 would benefit greatly from revisiting the research underlying the basic assumptions of the methodology. We therefore endorse a recommendation from that report, with modification (National Research Council, 2004:Rec. 6.2):

Recommendation 7.1: The Census Bureau should continue to pursue methods of improving demographic analysis estimates, working in concert with other statistical agencies that use and provide data inputs to the postcensal population estimates. Work should focus especially on improving estimates of net immigration. Attention should also be paid to quantifying and reporting uncertainty in demographic estimates. Updated assessments of the assumptions underlying demographic analysis estimates, including the completeness of birth registration, should also be considered.

7–C ENHANCING ADMINISTRATIVE RECORDS ANALYSIS FOR 2010

For several years, the possibility of a census conducted in part (or even in whole) by use of administrative records—the person-level data maintained by a host of federal government programs—has been the focus of recurring discussions. The potential applicability of administrative records in the census has increased, both as the administrative records databases maintained by the government have become more complete and as the computing capacity to merge and manage multiple lists has become more powerful and more sophisticated.

7–C.1 Administrative Records Experiment in the 2000 Census

As part of the program of experiments planned to accompany the 2000 census, the Census Bureau initiated an administrative records experiment, which came to be known as AREX 2000.

The experiment examined the possibility of using administrative records for several purposes, including the derivation of population estimates (that is, an administrative records census). Other uses that AREX 2000 was intended to investigate include the use of administrative information to improve the Master Address File, to help with census unduplication, and to refine intercensal or postcensal population estimates. The experiment also considered the possible use of administrative records as a resource for imputation, either for households with no report at all or for those with missing data items. Results of the experiment are reported in Bauder and Judson (2003); Berning (2003); Berning and Cook (2003); Heimovitz (2003); and Judson and Bye (2003).

AREX 2000 was limited in scope to five county-level sites: the city of Baltimore, Maryland; the surrounding Baltimore County; and Douglas, El Paso, and Jefferson Counties, Colorado. The 2000 census population of the test sites was 2.6 million, in 1.2 million households. AREX 2000 was charged with using administrative records to provide population counts and demographic characteristics for census tracts and blocks in the sites. Though analysis was limited to the test sites, AREX 2000 used as its base a national-level resource: the Census Bureau-compiled Statistical Administrative Records System (StARS). StARS merged and unduplicated records from six major administrative records databases:

- Internal Revenue Service (IRS) Individual Master File (IMF 1040);
- IRS Information Returns Master File (IRMF W-2/1099);
- Department of Housing and Urban Development (HUD) Tenant Rental Assistance Certification System (TRACS) file;
- Center for Medicare and Medicaid Services (CMS) Medicare Enrollment Database (MEDB) file;
- Indian Health Service (IHS) Patient Registration System file; and
- Selective Service System (SSS) Registration File.
Demographic data to impute missing values remaining from these sources were derived from the Census Numident file, an edited version of the Social Security Administration's master file of assigned Social Security numbers (also known as the Numerical Identification, or Numident, file). The specific data assembled to form StARS were from 1999 (the IRS data sources were for tax year 1998). These resources were used both as records of individuals and as listings of addresses. The information culled from the files was selected based on its currency and perceived quality; records were also assessed by whether they could be geocoded (that is, whether the addresses could be matched to the TIGER geographic database).

With respect to data on individuals, 875 million records were initially available after merging. After unduplication and removal of known deceased individuals and persons residing outside the United States, a file of 257 million individuals was produced (Judson and Bye, 2003:11). With respect to addresses, almost 800 million were available at the start; after unduplication and removal of business addresses (and other operations), approximately 147 million addresses were produced, of which 73 percent could be geocoded (Judson and Bye, 2003:15).

Two methods were examined for taking an administrative records census, referred to as "top-down" and "bottom-up." The top-down approach was a raw administrative-records-only census: tallying the number of people on StARS (the merged and unduplicated file) with addresses geocoded to the test site locations.
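The merge-and-unduplicate step can be illustrated in miniature. The real StARS processing keys on Social Security numbers validated against the Numident file; the toy key on normalized name and date of birth below, and the record contents, are assumptions made only for this sketch.

```python
def merge_and_unduplicate(*sources):
    """Merge person records from several administrative files, keeping
    the first record seen for each (normalized name, date of birth) key."""
    seen = set()
    merged = []
    for records in sources:
        for person in records:
            key = (person["name"].strip().lower(), person["dob"])
            if key not in seen:
                seen.add(key)
                merged.append(person)
    return merged

# Hypothetical extracts from two source files; "ANN LEE " and "Ann Lee"
# normalize to the same key, so only three distinct persons remain.
irs = [{"name": "Ann Lee", "dob": "1960-01-02"},
       {"name": "Bob Cruz", "dob": "1975-07-30"}]
medicare = [{"name": "ANN LEE ", "dob": "1960-01-02"},
            {"name": "Dia Chen", "dob": "1938-11-05"}]
unduplicated = merge_and_unduplicate(irs, medicare)  # 3 distinct persons
```

At StARS scale the same shape of operation reduced 875 million merged records to the 257 million-person file cited above.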
The bottom-up approach matched the StARS address list with the Decennial Master Address File (DMAF) in order to simulate the mailout/mailback and nonresponse follow-up phases of the census.2 Data from StARS records with addresses matching to the DMAF are thought of as the mailout/mailback piece; to simulate nonresponse follow-up, 2000 census counts for DMAF addresses not found in StARS were added to the "mailout/mailback" administrative records count.3

2   The DMAF is the version of the MAF that is extracted prior to the census and used to print mailing address labels and monitor mail response.

3   An administrative records census with a bottom-up design would typically have field follow-up for DMAF addresses that are not found in the administrative records database. In lieu of actually doing field follow-up as part of AREX 2000 (and incurring substantial costs as a result), 2000 census counts were used for the addresses that would have been designated for field follow-up.
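The two tallies can be contrasted in a toy example. All addresses and counts below are invented, and the sketch ignores the clerical review involved in real address matching.

```python
# StARS person counts at geocoded addresses, and 2000 census counts at
# DMAF addresses (invented for illustration).
stars_counts = {"101 Main St": 3, "102 Main St": 2}
census_counts = {"101 Main St": 4, "102 Main St": 2, "103 Main St": 5}

# Top-down: administrative records only.
top_down = sum(stars_counts.values())  # 5

# Bottom-up: StARS counts where the address matches the DMAF, with
# census counts standing in for field follow-up at unmatched addresses.
bottom_up = sum(stars_counts.get(addr, count)
                for addr, count in census_counts.items())  # 10
```

The toy numbers mirror the experiment's finding that the bottom-up design recovers people at addresses the administrative records miss entirely, which is where most of the top-down undercount arose.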

AREX 2000 had important limitations, which are acknowledged by the Bureau in the reports of the experiment. First, AREX 2000 used a version of StARS created using 1998 and 1999 data, creating a time gap relative to the census reference date of April 1, 2000. Second, additional structure in the experimental design of AREX 2000 could have provided more information concerning the value of various components of the AREX 2000 operation. Specifically, evaluation of the choice of "best" address (as well as other field and clerical operations) could have been carried out using a more elaborate design. The panel's first interim report made suggestions that were not included in the AREX 2000 design, including (1) integration of AREX 2000 with the Accuracy and Coverage Evaluation so that it could be determined which households and individuals ACE tended to miss, and (2) field follow-up to help evaluate the quality of the merged administrative records list. Another limitation is that, for the purposes of the experiment, the Bureau elected not to consider administrative records from commercial sources or to try to draw from state and local government records.

Nevertheless, even with these various limitations, AREX 2000 was a valuable experiment. It demonstrated the feasibility of merge and unduplication operations that had not been evaluated previously. AREX 2000 also provided extremely useful information on the value of administrative records for use in assisting nonresponse follow-up. In evaluating the experiment, the Census Bureau concluded that the top-down approach to an administrative records census experienced an 8 percent undercount across the test sites, a substantial figure. However, this undercount was cut to 1 percent by the bottom-up procedure (Judson and Bye, 2003). These undercounts were carefully examined by various demographic characteristics and at different levels of geographic aggregation.
In addition, logistic regression models were used to help predict for which types of households administrative records data might be useful for providing whole-household imputations.

7–C.2 Administrative Records for 2010

AREX 2000 demonstrated the potential for the use of administrative records in the census process. Exactly how great that potential is—or, put another way, how close these methods are to actual implementation in the census—is less clear. At the very least, administrative records research should be pursued for further and fuller experimentation as part of the 2010 census. That said, the possibility of a substantial role—for instance, use of administrative records to help with nonresponse follow-up, imputation, or targeting MAF improvements—ought not be rejected out of hand. Much work would be needed to develop and implement any of these ideas in 2010; should the decision be made to try to use administrative records in the census, it would be important to focus on one or two applications at most and to include an evaluation of those applications in the 2006 census test.

7–C.3 Other Possibilities: Megalist and Reverse Record Check

From a conceptual standpoint, at least two other possibilities could be posited for more extensive use of administrative records in the census context. These are sufficiently promising as to warrant additional research and experimentation, though they are admittedly more likely to be part of a discussion of methodology for 2020 rather than 2010.

The first possibility is the megalist: the concept of continuing to develop merged lists of administrative records as an independent listing of the population (National Research Council, 1985; Ericksen and Kadane, 1983). The administrative list could then be used as the second component of dual-systems estimation (filling the role of the postenumeration survey in the 2000 Accuracy and Coverage Evaluation, for instance) or as a third component in triple-systems estimation.
The advantages of a megalist over a postenumeration survey are the cost savings on field data collection and the possibility of improved representation of hard-to-enumerate populations (since relevant administrative lists could be used, e.g., lists of welfare program participants or the Indian Health Service records used in the current StARS). The primary advantage of triple-systems estimation is a reduced reliance on the independence assumption used in dual-systems estimation. The disadvantages of the megalist approach include questions about the representativeness of the merged lists, the quality of residence location information, and the availability of reliable information for matching the separate lists (e.g., date of birth). This latter concern is especially important for triple-systems estimation, in which the amount of matching is tripled in comparison with dual-systems estimation.

The second possibility is the reverse record check, the primary method used by Statistics Canada in evaluating the coverage of the Canadian census (Gosselin, 1980). In this technique, samples of births, immigrants, those counted in the most recent census, and those missed in the most recent census (which is roughly provided by the previous implementation of this program) are each traced to their current address to arrive at a target count for each area to compare against the census counts. Since the Canadian census is taken every 5 years, tracing addresses forward in time requires finding people after a lag of only 5 years. For the United States census, by contrast, tracing would have to extend over a lag of 10 years. This crucial difference in the application of the technique to the United States census was tested by the Census Bureau in 1984 in the Forward Trace Study (Hogan, 1983), which found that tracing over 10 years was not feasible. However, administrative lists are of higher quality now than in 1984, and the reverse record check may deserve reevaluation as a possibility for the United States census.
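In schematic terms, the reverse record check assembles its target count from the four traced frames and compares it with the census. The frame sizes below are invented for illustration only.

```python
def rrc_target(traced: dict) -> int:
    """Sum the four traced frames of a reverse record check: births,
    immigrants, persons counted in the previous census, and persons
    missed by it."""
    frames = ("births", "immigrants", "previously_counted", "previously_missed")
    return sum(traced[f] for f in frames)

# Hypothetical frame sizes for one area after tracing to current address.
traced_frames = {"births": 1800, "immigrants": 600,
                 "previously_counted": 28000, "previously_missed": 700}
target = rrc_target(traced_frames)                 # 31,100
census_count = 30400
net_undercount = (target - census_count) / target  # about 2.3 percent
```

The practical difficulty lies not in this arithmetic but in tracing each sampled person to a current address, which is exactly the step the Forward Trace Study found infeasible over a 10-year lag.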