
Coverage Measurement in the 2010 Census (2009)

Chapter: 4 Technical Issues

Suggested Citation: "4 Technical Issues." National Research Council. 2009. Coverage Measurement in the 2010 Census. Washington, DC: The National Academies Press. doi: 10.17226/12524.

4 Technical Issues

This chapter discusses several issues related to the proposed census coverage measurement (CCM) program for 2010: the sample design for the CCM postenumeration survey (PES), use of logistic regression models, missing data in new coverage error models, matching cases with minimal information, and demographic analysis. On several of these topics the panel offers recommendations for the Census Bureau.

SAMPLE DESIGN FOR CENSUS COVERAGE MEASUREMENT

The Census Bureau is planning a CCM PES sample of 300,000 housing units, with primary sampling units composed of block clusters (for details, see Fenstermaker, 2005). An important question concerning the census coverage measurement program in 2010 is to what extent the new goal of process improvement can and should be incorporated into the design of the CCM PES.

For purposes of CCM design, the United States will be divided into 3.7 million block clusters, and the CCM will select about 10,000 of these, each averaging roughly 30 housing units (for a total of 300,000 housing units). The Census Bureau will use an initial stratification of the 3.7 million block clusters into four types: (a) small, with between 0 and 2 housing units (as determined by the Census Bureau's Master Address File in 2009); (b) medium, with between 3 and 79 housing units; (c) large, with 80 or more housing units; and (d) block clusters of groups of American Indians on reservations. The currently proposed CCM design will allocate a minimum of 1,800 housing units, from about 60 medium and large block clusters, per state (3,000 of the 10,000 block clusters), with the remainder allocated in proportion to state population size.

Also, Hawaii is allocated a minimum of 4,500 housing units in the CCM sample (roughly 150 block clusters), and 10,000 housing units (roughly 330 block clusters) are selected from block clusters of American Indians living on reservations, allocated in proportion to the number of American Indians living on reservations in each state.

Once the 10,000 block clusters for the CCM are identified, they will be independently listed to determine how many housing units are actually present (since the MAF does not provide a perfect count and also because the MAF will be slightly dated). In particular, this independent listing will find that many of the small block clusters have more than two housing units. If a small block cluster is found to have more than 10 housing units, current plans are to select it into the CCM sample with certainty. Otherwise, the remaining small block clusters will be subsampled. (Plans are to subsample small block clusters with between zero and two housing units at the rate of 0.1, those with between three and five housing units at the rate of 0.25, and those with between six and nine housing units at the rate of 0.45; a sketch of these rates appears below.) Finally, regarding substate allocations of block clusters, while plans are not yet final, the Census Bureau is likely to include some modest degree of oversampling of block clusters in areas that have a large fraction of people who rent their residences and possibly in areas that have a large fraction of minority residents.

The general argument in support of the state allocations for the 2010 CCM PES is that they mimic those for 2000, since the Census Bureau was generally satisfied with the 2000 design of the Accuracy and Coverage Evaluation (A.C.E.) Program in terms of the variance of estimates of net undercoverage for poststrata. (The Census Bureau has no specific variance requirements for the 2010 CCM estimates, because production of adjusted counts is not anticipated.) With respect to substate allocations, the Census Bureau is concerned with increased variances and so intends to refrain from more than a modest amount of oversampling.

The Census Bureau examined some alternative specifications for the design of the CCM PES to see if they might have advantages, using simulation studies of both the quality of the resulting net coverage error estimates and the quality of estimates of the number of omissions and erroneous enumerations at the national level and for 64 poststrata (for details, see Fenstermaker, 2005, 2006).
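The small-block-cluster subsampling rules described above can be summarized in a short function. This is only an illustrative sketch: the function name and interface are ours, the rates are simply those quoted in the text, and the treatment of a cluster with exactly 10 listed housing units (which the text does not specify) is an assumption.

```python
def small_block_retention_rate(listed_housing_units):
    """Retention (subsampling) probability for a small block cluster, given the
    housing-unit count found by independent listing.  Illustrative only; the
    rates follow the plans described in the text, not Census Bureau code."""
    if listed_housing_units > 10:
        return 1.0        # selected into the CCM sample with certainty
    if listed_housing_units >= 6:
        return 0.45       # 6-9 units (exactly 10 grouped here as an assumption)
    if listed_housing_units >= 3:
        return 0.25       # 3-5 units
    return 0.10           # 0-2 units
```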

Initially, four designs were examined: (1) the design described above, that is, allocations proportional to total state population, with a minimum of 60 block clusters per state and with Hawaii allocated at least 150 block clusters; (2) as (1), except with Hawaii allocated at least 60 block clusters; (3) allocations to the four census regions to minimize the variance of estimates of erroneous enumerations, with allocations within regions proportional to state size; and (4) half of the sample allocated proportional to the number of housing units within update/leave areas (areas in which the enumerator updates the address list and, at the same time, drops off a questionnaire to be filled out and returned by mail) and half allocated proportional to each state's number of housing units. Through use of simulations, for each design and resulting set of PES samples across simulation replications, national estimates were computed of the rate of erroneous enumerations (and the rates of erroneous enumerations from mail returns, from nonresponse follow-up, and from coverage follow-up), the nonmatch rate, the omission rate, and the net error rate. National estimates of the population were also computed, along with their standard errors. The same analysis was done at the poststratum level. One hundred replications were used for the simulation study. The results supported retention of the design that closely approximated the 2000 A.C.E. design (described above). A subsequent analysis examined an additional six proposed sample designs.

The panel supports the overall sample size of 300,000 housing units, which was also endorsed, as part of Recommendation 6.1, by the Panel to Review the 2000 Census (National Research Council, 2004b). Such a design would produce net coverage estimates of similar precision to those of the 2000 A.C.E. The adequacy of the CCM sample size is somewhat supported by the adequacy of the A.C.E. sample size, though the objectives of these surveys have changed and therefore arguments used to support the A.C.E. sample size may no longer be fully relevant. However, such a position is necessary given the current lack of experience estimating the components of census coverage error.

Aside from sample size, the selection of a sample design for the CCM in 2010 will involve addressing related but somewhat competing goals, given that there are two overall objectives of the coverage measurement program for 2010. First, there is the primary objective: the measurement and analytic study of components of census coverage error. Second, there is still the need to be able to measure net coverage error, for at least three reasons: (1) to estimate the number of census omissions, (2) to serve the many users who remain interested in assessments of net error at least for states and major demographic groups, and (3) to facilitate comparison with the quality of the 2000 census.

To address the first general goal, one would like to target problematic domains, determined using 2000 census data or data from the American Community Survey (ACS), for which a high frequency of various types of census coverage error is predicted.

(In an optimal design for any individual component, the sampling rate would be proportional to the stratum standard deviation, which is likely to be higher in strata where the particular coverage error is greater.) However, one has to be careful, because one also needs the ability to discover any unanticipated problems that might appear in areas that were relatively easy to count in 2000. Each census seems to raise relatively novel sources of census coverage error, and at the same time, each census seems to have areas that were hard to count a decade earlier and subsequently are relatively easy to count.

Yet the goal of producing high-quality estimates of net coverage error for all states and for all major demographic groups calls for a design that is somewhat less targeted. As with estimation of components of error, the most efficient design for estimation of net coverage error would oversample areas with high rates of omissions or erroneous enumerations. This then allows reducing the sampling weights associated with individual blocks expected to exhibit the most variance in the two components of net error. However, it is especially critical for net error estimation to avoid extreme undersampling in any areas, because large sampling weights will quickly inflate variances if associated with blocks having more problems than anticipated.

Another way to look at the situation is that there is a modest tension between the need for cross-U.S. reports on net coverage error and the need for specific analytic studies of possibly problematic processes. So if one had a list of potentially worrisome places where census processes are likely to enumerate certain kinds of housing units with a high frequency of coverage error, those places should be oversampled in the 2010 CCM design. But this should be done while maintaining the ability to produce reliable estimates of net coverage error at some level of geographic and demographic detail.

Given this modest tension, the panel believes that the Census Bureau has selected a design that may not sufficiently accommodate the primary goal of measuring and analyzing components of census coverage error. The state allocations of the Census Bureau's proposed CCM sample design are too oriented toward producing state-level estimates of net undercoverage of comparable quality with the 2000 estimates. Instead, the new purpose of CCM in 2010 should be accommodated by modifying the state allocations of block clusters to include more block clusters from states that are predicted to be harder to count, by including a greater degree of oversampling of substate areas that are likely to be difficult to count (though the latter is clearly dependent on the Census Bureau's as yet unspecified plans), or both.

The analysis carried out by the Census Bureau of 10 sample designs for state allocations is thorough.

However, with respect to substate allocations of block clusters, the Census Bureau might consider, in addition to oversampling medium and large block clusters with a high percentage of renters, oversampling block clusters with large percentages of individuals or housing units with other features that might be associated with census coverage error, such as: (1) small multiunit structures, (2) a high percentage of foreign-born residents in 2000, (3) a high percentage of proxy interviews in 2000, (4) a high percentage of whole-household imputation in 2000, (5) a high percentage of vacation homes, or (6) recent additions to the housing stock. In addition, as in 2000, the Census Bureau could oversample blocks in which there is a higher chance of geocoding errors or areas in which there was a high percentage of additions through the Local Update of Census Addresses (LUCA) Program or block canvass adds or deletes. It is likely that efforts devoted to modifying substate allocations will be more important than the state allocations, but both deserve attention.

[Footnote: In considering characteristics that can serve as the basis for oversampling, it is important to stress that even if some problematic circumstances are identified, it will generally be the case that very little individual household-level information could be used as the basis for oversampling, since such information would have to be available on the MAF. However, area-wide frequencies of the same characteristics can often provide reasonable surrogates. For example, areas with many renters can be targeted, but one cannot target renters individually for oversampling.]

In addition to the above general suggestions, the panel has a specific suggestion for the 2010 CCM sample design that provides a reasonable compromise between designs that are focused on estimation of net coverage error and designs that focus on components of census coverage error. We urge the Census Bureau to evaluate a CCM sample design that retains the identical structure of the currently proposed design for a substantial fraction of the sampled units, possibly 60 to 75 percent (by making the obvious change to the sampling rates), and allocates the remaining sample to anticipated problematic regions or block clusters. Such a change would potentially provide a much greater number of census coverage errors to support models examining which factors relate to coverage error. At the same time, allocating the bulk of the sample to a general-purpose design would limit the risk of inflated variances for net error estimates associated with finding large errors in unexpected locations. Research on what percentage to use and how this compares with the Census Bureau's proposed design can be carried out using simulation studies of the type the Census Bureau has already carried out, though it also would be very useful to incorporate some accounting for any differences that are expected to be seen between 2000 and 2010 (possibly based on the ACS).

In conducting additional simulations, we propose that the Census Bureau also reconsider the metrics it uses to compare and assess 2010 CCM sample designs.

In its simulations, the Census Bureau examined estimates of the coefficient of variation of estimates of net error, the rate of erroneous enumerations, the rate of omissions, and the rate of P-sample nonmatches. The Bureau also looked at coefficients of variation for net error estimates for groups of poststrata from 2000. The panel would like to suggest, in addition, the use of several additional types of metrics. Letting DSE_i = the direct dual-systems estimate for an aggregate i (e.g., state by major demographic group), E_i = the E-sample total for an aggregate i, P_i = the P-sample total for an aggregate i, I_i = the number of imputations for an aggregate i, EE_i = the number of erroneous enumerations for an aggregate i, and M_i = the number of matches for an aggregate i, the panel believes that the following metrics would provide a more direct indication of the benefits of alternative CCM designs:

\[
\frac{DSE_i + I_i}{E_i + I_i}, \qquad \frac{EE_i}{E_i + I_i}, \qquad \text{and} \qquad \frac{I_i}{E_i + I_i}.
\]

The first metric is intended to be evaluated at the block cluster level (based on synthetic estimation), while the remaining two metrics are computed at the state level. The first metric, (DSE_i + I_i)/(E_i + I_i), is a local undercount estimate, since DSE_i + I_i is similar to the dual-systems estimate and E_i + I_i is an estimate of the census count. The second metric, EE_i/(E_i + I_i), is a measure of the percentage of erroneous enumerations. The third metric, I_i/(E_i + I_i), is a measure of the percentage of whole-person imputations. The last two metrics therefore assess the degree to which an area is encountering problems in enumeration. The goal then is to select a CCM sample design that produces estimates of these quantities, at the indicated level, that have substantially lower variances than those from the currently proposed design.

Simulation studies of the design alternatives mentioned above, using these metrics, may identify designs that are nearly as effective as the Census Bureau's current design at estimating net coverage error at the level of states and major demographic groups while increasing the number of sampled census coverage errors.
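As a concrete illustration of how the three suggested metrics could be tabulated for a set of aggregates in a design simulation, the following sketch computes them from per-aggregate tallies. The variable names mirror the definitions above (DSE_i, E_i, I_i, EE_i); the dictionary layout and the example numbers are hypothetical.

```python
def design_metrics(agg):
    """Compute the three metrics suggested by the panel for one aggregate.

    `agg` is a dict with keys 'DSE' (direct dual-systems estimate), 'E'
    (E-sample total), 'I' (whole-person imputations), and 'EE' (erroneous
    enumerations); the structure is illustrative, not a Census Bureau format."""
    census_count = agg['E'] + agg['I']   # E_i + I_i approximates the census count
    return {
        'local_undercount': (agg['DSE'] + agg['I']) / census_count,  # (DSE_i + I_i)/(E_i + I_i)
        'pct_erroneous':    agg['EE'] / census_count,                # EE_i/(E_i + I_i)
        'pct_imputed':      agg['I'] / census_count,                 # I_i/(E_i + I_i)
    }

# Example: one hypothetical state-level aggregate
print(design_metrics({'DSE': 101_500, 'E': 98_000, 'I': 1_200, 'EE': 1_800}))
```

In a simulation study, these quantities would be computed for every aggregate under each candidate design and across replications, and their variances compared between designs.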

The panel believes there still may be sufficient time to carry out this analysis.

The panel's suggestion that the Census Bureau consider additional oversampling of difficult-to-count housing units in the 2010 CCM design is incomplete without considering what data should be used in support of this effort. Certainly, the Census Bureau could continue to use census and A.C.E. data from 2000, as in the above simulations, possibly making some effort to better identify erroneous enumerations and omissions given the weakness of A.C.E. data for that purpose. However, that might be too time-intensive an activity as 2010 nears. Census, ACS, and StARS (see Chapter 3) data could also be used as the basis for an artificial population study, in which the components of census coverage error were "imposed" on the census enumerations. That is, statistical models using current best guesses for causal factors and their impact on coverage error could be developed, relating person, household, and contextual characteristics to probabilities of duplication, omission, and being counted in the wrong location. Then, a number of simulated censuses and PESs could be conducted, with people and housing units missed, duplicated, and counted in the wrong place with various probabilities. Erroneous enumerations, as defined here (which excludes duplications and enumerations in the wrong location), would be more difficult to incorporate in such a study, since one does not have a base population to which a model can be applied. However, this component is likely the least important to address, and there may not be an effective causal model predicting which newborns are erroneously included in the census, which recent deaths are erroneously included, which visitors are included, and which fictitious individuals are included. If the suggested study is carried out, then analyses in 2010 to identify which factors are and are not associated with various components of coverage error can be used to refine the models used for incorporating components of coverage error, to better plan the coverage measurement data collection in 2020.

Finally, a very serious complication in carrying out this research plan is that many of the most important predictive factors in statistical models of components of census coverage error will have to be indicator variables for the various census processes used in association with the enumeration of each housing unit or individual. (This requirement results from having a feedback loop that identifies census processes in need of modification.) However, the census processes are generally not represented on the standard census files or in A.C.E. in 2000. This lack strongly argues for the collection of a master trace sample (a sample of households for which the entire census procedural history is retained) in 2010 and for designing the CCM sample and the master trace sample so that there is substantial overlap between them.
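A minimal sketch of the artificial-population idea described above might look like the following: each person in a base file carries model-based probabilities of omission, duplication, and wrong-location enumeration, and repeated census realizations are drawn from those probabilities. The probabilities, field names, and data layout are assumptions for illustration, not the Census Bureau's models.

```python
import numpy as np

rng = np.random.default_rng(2010)

def simulate_census(pop, n_reps=10):
    """Draw simulated census outcomes for an artificial population.

    `pop` is a structured array with, for each person, assumed model-based
    probabilities p_omit, p_dup, and p_wrong_loc.  Returns counts of each
    error component per simulated census replication."""
    results = []
    for _ in range(n_reps):
        u = rng.random((len(pop), 3))
        omitted = u[:, 0] < pop['p_omit']
        # duplication and wrong-location errors apply only to people not omitted
        duplicated = (~omitted) & (u[:, 1] < pop['p_dup'])
        wrong_loc = (~omitted) & (u[:, 2] < pop['p_wrong_loc'])
        results.append({'omissions': int(omitted.sum()),
                        'duplicates': int(duplicated.sum()),
                        'wrong_location': int(wrong_loc.sum())})
    return results

# Hypothetical five-person base file with identical assumed probabilities
pop = np.array([(0.02, 0.01, 0.005)] * 5,
               dtype=[('p_omit', float), ('p_dup', float), ('p_wrong_loc', float)])
print(simulate_census(pop, n_reps=2))
```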

For current work, and in the case that a master trace sample database is not constructed in 2010, the Census Bureau should determine how it can use census management information files to populate an analysis database that represents the components of census coverage error and as many as possible of the predictors discussed in Chapter 5.

None of the approaches suggested here as to how to examine the optimal extent of oversampling problematic households is ideal, which is not surprising. The Census Bureau does not have good historic information on how coverage errors are related to census processes, which makes targeting the sample much harder and which makes simulating the situation harder as well. However, the Census Bureau has acquired a lot of information about the circumstances that cause some of the coverage errors and where those more problematic areas are located; those areas need to be oversampled to some extent. The coverage problems do change from census to census, and some of the problematic areas are due to idiosyncrasies that appear for only a single census. Yet it is sensible to assume that much of the causal nature of coverage error is persistent across at least a few censuses, and that is what needs to be captured in the CCM survey. So focusing on areas with high proxy interview rates or high imputation rates in the previous census, on areas with a large percentage of vacation homes, or on areas with many small multiunit housing units (though this has some difficult definitional and implementation aspects) is likely to be beneficial in the design of the 2010 CCM survey, since such households have been and are likely to remain hard to count. Of course, over time, new problems will crop up and old ones will be addressed, and so the process of census improvement will be a dynamic one.

Given that the design of the 2010 CCM PES needs to target block groups that have a higher frequency of housing units that are vulnerable to census coverage error, the Census Bureau should give serious consideration to alternative designs that, without sacrificing much efficiency in estimating net coverage error, could provide a larger number of (anticipated to be) hard-to-enumerate households and individuals. Such a design would improve the estimation of parameters of the statistical models linking coverage error to census procedures. In particular, the Census Bureau should consider implementing a design that mixes a high proportion of cases selected using the current design with a smaller proportion of cases in hard-to-enumerate areas. This design could be assessed through simulation studies like those the Bureau has already used, supplemented by the additional metrics suggested here.

Recommendation 6: The Census Bureau should compare its sample design for the 2010 census coverage measurement postenumeration survey with alternative designs that give greater sampling probability to housing units that are anticipated to be hard to enumerate. If an alternative design proves preferable for the joint goals of estimating components of coverage error and net coverage error, such a design should be used in place of the current sample design.

LOGISTIC REGRESSION MODELS

In the last few years the Census Bureau has devoted a considerable amount of its coverage measurement research resources to improving the estimation of net coverage error in 2010, with a primary focus on developing two logistic regression models to replace poststratification in addressing correlation bias. Any small-area estimates of net coverage error will likely be based on these same logistic regression models, replacing the use of (so-called) synthetic estimation. Both poststrata and synthetic estimation were used in the coverage measurement programs in 1990 and in 2000, so the current plan is a substantial change to the estimation of net coverage error at the level of both large and small domains. Despite the new focus on estimating components of error, there remain good reasons for devoting considerable attention to the estimation of net error.

First, given the focus in 2000 on net error estimation, the data available from A.C.E. are not directly useful as substitutes for the data on components of coverage error that will be collected in 2010. An important example of this is the different definitions of correct enumerations in 2000 and 2010, which suggests that the frequency of erroneous enumerations and omissions will likely be substantially less, and of a somewhat different nature, in 2010 than in 2000. As a result, any attempts to model the 2000 A.C.E. data without accounting for various differences between 2000 and 2010 will probably provide limited guidance for how to estimate components of coverage error in 2010. (However, we believe that some efforts in this direction are warranted.) Second, as argued in Mulry and Kostanich (2006), estimating net coverage error facilitates estimation of the number of census omissions. Therefore, some focus on estimation of net coverage error is justified. Third, as noted in Chapter 2, strong interest remains among many census data users in the assessment of net coverage error, in particular for demographic groups, but also for states and cities.

[Footnote: Although the Census Bureau will use estimates of net coverage error to develop estimates of the number of census omissions for domains that will support various tabulations, we hope that the Census Bureau will develop analytical models based on the P-sample individuals that are determined to be census omissions. The main disadvantage of doing this is that this analysis may miss the types of census omissions that are not captured in either the census or the P-sample, which are collectively estimated using the Census Bureau's approach.]

The Census Bureau plans to use logistic regression for fitting the probability of match status for the P-sample and the probability of correct enumeration status for the E-sample. Logistic regression is more flexible than poststratification in terms of handling continuous predictor variables and selective use of interactions among predictor variables. This flexibility potentially allows inclusion of more predictor variables without increasing the variance of estimated probabilities. Furthermore, logistic regression is a model that, in this context, is applied at the level of the individual; therefore, information collected at that level can easily be used in conjunction with information that is collected at a more aggregate level. Finally, not only is logistic regression likely to be better than poststratification in estimating net coverage error for these reasons, it is also much better suited than poststratification to the analytic purpose of providing a better understanding of which factors are and are not related to net coverage error.

Poststratification is mentioned in the earliest literature advocating the use of dual-systems estimation (DSE) to measure populations (Sekar and Deming, 1949), and it has been used in the census since the 1980 postenumeration program to reduce correlation bias. Poststratification simply means that one partitions the CCM sample data into groups that are more homogeneous and then separately estimates the adjusted population counts

\[
(C - II)\left(\frac{CE}{E}\right)\left(\frac{P}{M}\right)
\]

within those poststrata. (See Chapter 3 for definitions. Note that CE is defined consistent with the definition of a correct enumeration in A.C.E., that is, an enumeration that is located in the search area.) A perfect poststratification would partition the P-sample population and the E-sample population so that the underlying enumeration propensities for individuals in a poststratum were identical. However, this is unattainable, and therefore the practical goal is to partition the sample cases so that individuals are more alike within a poststratum than individuals from different poststrata. If this is accomplished, correlation bias should be reduced (see Kadane et al., 1999, for details).

Poststratification also supports the use of synthetic estimation, which carries adjustments to census counts down to low geographic levels. Synthetic estimation makes use of coverage correction factors,

\[
\left(\frac{C - II}{C}\right)\left(\frac{CE}{E}\right)\left(\frac{P}{M}\right),
\]

which are applied to any subpopulation in a poststratum by multiplying the appropriate factor by the relevant subpopulation's census count to produce the adjusted count for that subpopulation.

To produce geographic estimates, which often requires adding subpopulations that belong to different poststrata, one simply sums the associated adjusted counts. Estimates of the variance of synthetic estimates for small domains are necessarily a combination of estimates of the variance of the coverage correction factors for the poststrata involved (depending on the domains) and a residual variation due to any unmodeled heterogeneity of the relevant subpopulations of interest within the required poststrata. The first component can be estimated by standard methods. However, estimation of the second variance component is more difficult.

As mentioned above, although poststratification has the advantages of reducing correlation bias and supporting synthetic estimation, a major disadvantage is that, as applied by the Census Bureau, it allows only a relatively small number of factors to be included in the poststratification scheme (and in the resulting synthetic estimation). This limitation exists because the Census Bureau typically includes the full cross-classification of the factors used to define the poststrata, and, as a result, the individual poststrata quickly become very sparsely populated, despite the large sample size of the PES. Use of more poststratification factors, and therefore more poststrata, trades greater homogeneity in each poststratum against higher sampling variances for the coverage correction factors. Furthermore, the fact that the various poststrata generally share some characteristics with other poststrata (for instance, there are many poststrata for Hispanic women) is generally ignored in the associated estimation. As a result, there is a failure to pool information when it may be beneficial to do so.

The alternative that is being planned for use by the Census Bureau in the 2010 CCM is logistic regression of both the binary match/nonmatch variable and the binary correct enumeration/not correct enumeration variable. Poststratification is a special case of logistic regression in this context in which the predictors of the logistic regression are indicator variables for membership in the categories defining the poststrata, and all interactions are included in the model. In theory, for the same reasons that logistic regression may be preferred to poststratification at the aggregate level at which that analysis is carried out, small-area estimates that are based on the probabilities of match and correct enumeration status estimated using logistic regression could improve on those provided through synthetic estimation by effectively averaging over more of the data. In the following, a number of issues relevant to the use of logistic regression are raised and discussed, and a variety of suggestions are given.
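For concreteness, the arithmetic of synthetic estimation described above can be sketched as follows: a coverage correction factor is computed for each poststratum and then applied to the census counts of the subpopulations that make up a small domain. The poststratum labels and all numbers below are invented for illustration.

```python
def correction_factor(C, II, CE, E, P, M):
    """Coverage correction factor ((C - II)/C) * (CE/E) * (P/M) for one poststratum."""
    return ((C - II) / C) * (CE / E) * (P / M)

# Hypothetical poststratum inputs: census count, whole-person imputations,
# estimated correct enumerations, E-sample total, P-sample total, and matches.
factors = {
    'owner_18_29':  correction_factor(C=50_000, II=900, CE=19_400, E=20_000, P=19_800, M=18_900),
    'renter_18_29': correction_factor(C=30_000, II=800, CE=11_300, E=12_000, P=11_900, M=10_800),
}

# Synthetic estimate for a small domain: sum of factor * census count over the
# poststratum pieces that fall in the domain.
domain_census_counts = {'owner_18_29': 1_200, 'renter_18_29': 800}
synthetic_estimate = sum(factors[ps] * n for ps, n in domain_census_counts.items())
print(round(synthetic_estimate))
```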

In addition, it should be noted that Chapter 5 contains Recommendation 10, which provides a list of key issues that need to be addressed by the Census Bureau in going forward with research on this topic.

Logistic regression was first suggested for use in the general area of DSE by Huggins (1989) and Alho (1990) and was specifically applied to census undercoverage in Alho et al. (1993). However, these studies did not consider how to treat cases with unresolved match status or unresolved correct enumeration status, and they made use of the data only from the P-sample blocks, rather than the full census. Haberman et al. (1998) introduced some additional features that addressed the above limitations. They proposed two separate logistic regressions to model match status (using P-sample data) and correct enumeration status (using E-sample data). To represent cases with unresolved match status (with a completely analogous discussion for correct enumeration status), two records are constructed, one "matched" and the other "unmatched," and weights are used to represent the "probability" that a given record matches to the census, given the available characteristics. (Match and correct enumeration probabilities for unresolved cases could be provided by a computer matcher developed by the Census Bureau.) Survey weights are also attached to all the records to reflect the complex sample design. This approach is the Census Bureau's leading candidate to support net coverage error modeling in 2010.

One can see how these two logistic regression models relate to DSE as follows. In the formula for DSE,

\[
(C - II)\left(\frac{CE}{E}\right)\left(\frac{P}{M}\right),
\]

the probability that a census enumeration is correct is the second factor, and the probability of a match is the inverse of the third factor. Therefore, the two logistic regression models estimate two of the three main factors in DSE, with the remaining factor being the number of matchable enumerations in the census, which is directly measured.

Using logistic regression, synthetic estimation can now be replaced by the following methodology. Letting \hat{p}_{CEi} represent the estimate from logistic regression of the correct enumeration probability for person i and letting \hat{p}_{Mi} represent the estimate from logistic regression of the match probability for person i, the estimated number of people in a small area is the sum of the ratio \hat{p}_{CEi}/\hat{p}_{Mi} over the individuals i in that area (ignoring the treatment of cases with insufficient information for matching). A grouped jackknife procedure is used to obtain the standard errors of the small-area estimates.

[Footnote: In this discussion, we are ignoring missing data in covariates, which introduce some complexities.]
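The record-duplication device of Haberman et al. (1998) described above can be sketched roughly as follows: an unresolved P-sample case contributes two records, one coded as matched and one as unmatched, each carrying its survey weight multiplied by the estimated probability of that outcome, and a weighted logistic regression is then fit. The data layout and column names below are ours, and scikit-learn's sample weights are used simply as one convenient way to do weighted maximum likelihood; this is an illustrative sketch, not the Census Bureau's implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def expand_unresolved(df):
    """Split each unresolved P-sample case into a 'matched' and an 'unmatched'
    record, weighting each by survey_weight times the estimated probability of
    that outcome.  Resolved cases pass through with their survey weight.
    Column names are illustrative."""
    resolved = df[df['match_prob'].isna()].assign(weight=lambda d: d['survey_weight'])
    unres = df[df['match_prob'].notna()]
    as_match = unres.assign(matched=1, weight=lambda d: d['survey_weight'] * d['match_prob'])
    as_nonmatch = unres.assign(matched=0, weight=lambda d: d['survey_weight'] * (1 - d['match_prob']))
    return pd.concat([resolved, as_match, as_nonmatch], ignore_index=True)

# Hypothetical P-sample extract: match_prob is NaN for resolved cases.
p_sample = pd.DataFrame({
    'age': [34, 22, 67, 45], 'owner': [1, 0, 1, 0],
    'matched': [1, 0, 1, 0],                      # resolved status (placeholder if unresolved)
    'match_prob': [np.nan, np.nan, 0.7, np.nan],  # unresolved case carries a probability
    'survey_weight': [350.0, 410.0, 390.0, 500.0],
})
expanded = expand_unresolved(p_sample)
X, y, w = expanded[['age', 'owner']], expanded['matched'], expanded['weight']
match_model = LogisticRegression(C=1e6).fit(X, y, sample_weight=w)  # essentially unpenalized
# An analogous weighted model of correct enumeration status would be fit to the E-sample.
```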

If the explanatory variables are limited to those collected in the census and are not characteristics or process variables from CCM, small-area estimates for any subdomain can be computed directly using the method just described. However, as noted above, this approach sacrifices the additional predictive power of covariates collected for cases in the P-sample. Techniques suggested by Eli Marks may be used to accommodate the use of P-sample variables at the subnational level (for details, see Marks et al., 1974).

[Footnote: The Census Bureau has examined competing estimators that all have empirical deficiencies in comparison with the above estimate. As mentioned, the estimate for the population of a domain is \sum_i \hat{p}_{CEi}/\hat{p}_{Mi} over all individuals i in the domain. A competing estimator that the Census Bureau has mentioned is \sum_j w_j \hat{p}_{CEj}/\hat{p}_{Mj}, which is now a sum over the individuals j in the PES blocks and in the relevant domain. Another competing estimator replaces the correct enumeration probability, \hat{p}_{CEj}, in these two alternatives by an indicator function for those individuals in the domain that had correct enumeration status, reducing the modeling to only the logistic regression model of match status. The problem with these two alternatives is that they are too sensitive to sampling variation. The Census Bureau has also considered variants of these two alternatives that reweight the data elements so that the data-defined people in the E-sample are ratio adjusted to the census counts in poststrata. A further problem with some of these approaches is that small-area estimates do not necessarily sum to the estimates for larger areas.]

There are some complications in using this approach to small-area estimation. One issue is how to incorporate the survey weights in the model-building and model-fitting processes. The CCM PES sampling weights need to be incorporated not only in the estimation of the logistic regression coefficients, but also in the decision as to which predictors to include in the logistic regression models and which model form to use, as well as in estimating the variance of the resulting estimates. The question of how to treat the complex sample design in these types of models has a substantial research literature. The approach taken by the Census Bureau is to weight the cases using the sampling weights. An alternative approach, which may produce more efficient estimates, is to include the variables that make up the sampling weights as predictors in the model (see, e.g., Little, 2003). Comparisons of these two approaches would be of interest.

A second complication is the treatment of missing data. Specifically, it is not clear how to effectively treat cases with insufficient information for matching in the estimation of the relevant logistic regression coefficients. Regarding the small-area estimation that results from the use of the logistic regression model, it is also not clear how to treat non-data-defined cases in the census.

[Footnote: Another complication, which we do not discuss, is that the adjustments made on the A.C.E. research file have resulted in the dependent variables occasionally lying outside the interval (0,1).]
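To make the domain estimator described above and its grouped jackknife standard error concrete, the sketch below computes the estimate as the sum of \hat{p}_{CEi}/\hat{p}_{Mi} over individuals in the domain and then deletes random groups of block clusters in turn. It is a simplified illustration: in practice the logistic regressions would be refit for each jackknife replicate and the survey weighting handled more carefully; all names and numbers are hypothetical.

```python
import numpy as np

def domain_estimate(p_ce, p_m):
    """Estimated number of people in a domain: sum over individuals of p_CE / p_M."""
    return float(np.sum(p_ce / p_m))

def grouped_jackknife_se(p_ce, p_m, cluster_ids, n_groups=50, seed=0):
    """Delete-a-group jackknife standard error of the domain estimate.
    Block clusters are randomly assigned to groups; each group is dropped in turn.
    (Refitting the models for each replicate is omitted to keep the sketch short.)"""
    rng = np.random.default_rng(seed)
    clusters = np.unique(cluster_ids)
    group_of = dict(zip(clusters, rng.integers(0, n_groups, len(clusters))))
    groups = np.array([group_of[c] for c in cluster_ids])
    replicates = []
    for g in range(n_groups):
        keep = groups != g
        # remaining cases reweighted by G/(G-1), the usual delete-a-group adjustment
        replicates.append(n_groups / (n_groups - 1) * domain_estimate(p_ce[keep], p_m[keep]))
    replicates = np.array(replicates)
    return float(np.sqrt((n_groups - 1) / n_groups * np.sum((replicates - replicates.mean()) ** 2)))

# Hypothetical fitted probabilities and block cluster labels for one small domain
p_ce = np.array([0.96, 0.91, 0.88, 0.93])
p_m = np.array([0.92, 0.85, 0.80, 0.90])
clusters = np.array([101, 101, 102, 102])
print(domain_estimate(p_ce, p_m), grouped_jackknife_se(p_ce, p_m, clusters, n_groups=2))
```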

To address this complication, we provide some guidance below in the section on missing data issues in coverage modeling.

The Census Bureau has focused much of its effort to date on developing the logistic regression approach by examining the performance of six models for both the P-sample matches and the E-sample correct enumerations. These logistic regression models all use explanatory variables that are indicator variables for various combinations of the levels of the six factors used to define the 416 poststrata used in the March 2001 net undercoverage estimates: race/origin (seven groups), age/sex (seven groups), tenure (owner, nonowner), metropolitan statistical area/type of enumeration area (MSA/TEA) (four groups), region (four groups), and mail return rate (high or low, with boundaries dependent on race/origin domain). (Type of enumeration area indicates which of the three main types of initial enumeration was used: (1) mailout-mailback, (2) list-enumerate, or (3) update-leave.) There are six sets of explanatory variables:

1. The 416 indicator variables for the poststrata used in the March 2001 poststratification.
2. The 150 main effects and first-order interactions of the variables used to define the March 2001 poststratification.
3. The 23 main effects of the variables used to define the March 2001 poststratification.
4. The 98 main effects and all interactions from the variables for three of the six factors from the March 2001 poststratification: race/origin, age/sex, and tenure. The acronym ROAST (race/origin, age/sex, tenure) is used to distinguish this reduced set of factors from the full set used in the 2001 poststratification.
5. The 62 main effects and first-order interactions from ROAST.
6. The 14 main effects from ROAST.

[Footnote: The number of interactions does not correspond to the situation of fully crossed effects, since the poststratification used in 2000 did not fully cross the six variables. For example, the poststrata of American Indians or Alaskan Natives living on a reservation are crossed only by age/sex and tenure, not by MSA/TEA, region, or mail return rate, and this extends to the ROAST models.]

These six models were fit to data from the A.C.E. research database (for further details, see Griffin, Mule, and Olson, 2005). The Census Bureau was interested in comparing the five logistic regression models (models 2-6) against the 2001 poststratification (model 1) with respect to the ability to fit A.C.E. data in a predictive setting. A predictive setting is appropriate since the models are fit using PES data but are then applied to the entire census data set for small-area estimation.

Certainly, if any of models 2-6 were evaluated to be equivalent to model 1 in such a comparison, that would argue for the use of logistic regression in place of poststratification, since logistic regression can make use of more variables, especially continuous variables, in addition to those used to define the poststrata. A more appropriate comparison would seem to be between the 2000 poststratification and logistic regression models that allow the use of additional variables chosen to provide additional predictive power.

When comparing nested models, that is, models that are identical except that some of the parameters in one of the models have been constrained to be constant (typically zero, which is equivalent to removing the associated predictor from the model), the distinction between fitting and prediction is typically represented by adding a penalty factor to a goodness-of-fit measure for including additional predictors in the larger model. As in linear regression, the additional parameters guarantee that the model with the larger set of predictors will fit the data at least as well as, if not better than, the more parsimonious model, but this advantage may be offset by the increased variances of the fitted values due to the estimation of more parameters. The combination of the goodness-of-fit statistic and the penalty for additional parameters reflects this tradeoff. Haberman et al. (1998) suggested using a logarithmic penalty function to address this issue, with a jackknife bias estimate to adjust for the use of unnecessary predictors (overfitting). Measures such as Mallows' C_p (Mallows, 1973) and the information criteria AIC and BIC also provide useful penalties for comparing regression models in a predictive situation.

The situation for comparing nonnested models is less straightforward but important to address, since the Census Bureau may need to make such comparisons. For example, one such nonnested alternative put forward by the Census Bureau separates the modeling of undercoverage into two models, one for the probability that an entire household will be missed and another for the probability that an individual in a partially enumerated household will be missed (for details, see Griffin, 2005). The panel agrees with the Census Bureau that cross-validation would be a suitable technique for comparing rival nested and nonnested models. In cross-validation, the sample is split so that the model can be fitted to one part and the accuracy of predictions evaluated on the other part; the accuracy of prediction is thus not overstated due to fitting and evaluating the model on the same data. A standard approach is to split the data into several equal-sized pieces and remove each piece in turn from the fitting data set. The performance of each fitted model is assessed using some loss function in predicting the values for the set-aside fraction, and the loss function is averaged over all of the replications so defined.

The Census Bureau implemented cross-validation using 100 equal-sized groups, and the loss function used was the logarithmic penalty function from Haberman et al. (1998). Finally, the average over all 100 groups was weighted using the A.C.E. survey weights.

The results of the Census Bureau's cross-validation comparison of the five alternative logistic regression models to the 2000 A.C.E. poststratification (Griffin, Mule, and Olson, 2005) are given in Table 4-1. The Correct Enumeration column provides the cross-validation statistic for each of the six models in estimating the correct enumeration rate, and the Match column provides the cross-validation statistic for each of the six models in estimating the match rate. The similarity of fit across the models suggests that many of the interactions in the poststratification model are relatively small. These findings also support the view that even the most effective of the five alternative models, model 2, offers only minor benefits over the full poststratification. However, it should be noted that these models are limited to the use of the variables in the 2000 poststratification and do not assess the potential of other predictors or model forms.

[Footnote: The observed ranks of the average weighted log likelihoods across models, and to a substantial degree even the average weighted log likelihoods themselves (not shown here), did not change when the number of cross-validation replications was changed from 100 to 25 or 20 (also not shown here).]

The panel also used cross-validation to assess the impact of the use of survey weights on the performance of the model (following DuMouchel and Duncan, 1983). Using the logistic regression model with only the main effects from the poststratification (model 3), we formed 100 groups for the cross-validation. (This was done in two ways to examine the degree to which the block clusters were homogeneous. In one computation, we randomly selected P- and E-sample people into 100 groups for cross-validation without regard for block cluster membership; in the second computation, we randomly selected P- and E-sample people into 100 groups while maintaining the block cluster structure of the A.C.E. sample design.) Using the cross-validation, we compared the performance of the logistic regression model unweighted by the survey weights with the performance of the logistic regression model weighted by the survey weights. We assessed performance using a weighted log likelihood penalty function. The results are given in Table 4-2. The results suggest that use of the survey weights in computing the logistic regression coefficients substantially improves performance in comparison with unweighted fitting, as assessed by the weighted criterion. This result raises the possibility that inclusion of the survey design variables as predictors may provide a distinct improvement in these models.
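A rough sketch of the kind of cross-validation comparison reported in Tables 4-1 and 4-2 follows: cases are split into equal-sized groups, each group is held out in turn, a weighted logistic regression is fit to the rest, and the survey-weighted logarithmic penalty is averaged over the held-out groups. The penalty matches the formula given in the note to Table 4-1; everything else (data layout, choice of fitting routine) is our illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_log_penalty(y, p_hat, w):
    """Survey-weighted logarithmic penalty:
    -(1/W) * sum_i w_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ]."""
    p_hat = np.clip(p_hat, 1e-12, 1 - 1e-12)
    return -np.sum(w * (y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))) / np.sum(w)

def cross_validated_penalty(X, y, w, n_groups=100, seed=0):
    """Average weighted log penalty over n_groups held-out folds (smaller is better)."""
    rng = np.random.default_rng(seed)
    groups = rng.integers(0, n_groups, len(y))
    penalties = []
    for g in range(n_groups):
        train, test = groups != g, groups == g
        if test.sum() == 0 or len(np.unique(y[train])) < 2:
            continue  # skip degenerate folds; with PES-sized data this does not occur
        model = LogisticRegression(C=1e6).fit(X[train], y[train], sample_weight=w[train])
        p_hat = model.predict_proba(X[test])[:, 1]
        penalties.append(weighted_log_penalty(y[test], p_hat, w[test]))
    return float(np.mean(penalties))
```

Competing predictor sets (or weighted versus unweighted fitting) would be compared by computing this statistic for each candidate on the same data and folds.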

TABLE 4-1  Cross-Validation of Six Preliminary Logistic Regression Models: Average Weighted Log Penalty Function

Model                                             No. of Parameters   Correct Enumeration   Match
1. Poststratification                             416                 .2351                 .2603
2. Main Effects and Two-Way Interactions          150                 .2349                 .2598
3. Main Effects                                    23                 .2354                 .2598
4. ROAST                                           98                 .2355                 .2617
5. ROAST Main Effects and Two-Way Interactions     62                 .2355                 .2618
6. ROAST Main Effects                              14                 .2360                 .2619

NOTE: The logarithmic penalty function used in the cross-validation for the correct enumeration rate modeling was

\[
-\frac{1}{W_E} \sum_{i \in E\text{-sample}} w_{ei} \left[ p_{CEi} \log\left(\hat{p}_{CEi}\right) + \left(1 - p_{CEi}\right) \log\left(1 - \hat{p}_{CEi}\right) \right],
\]

where W_E is the weighted total for the E-sample, w_{ei} is the sampling weight for the ith E-sample individual, p_{CEi} is the observed correct enumeration status, and \hat{p}_{CEi} is the predicted correct enumeration probability from the model. An analogous function was used for modeling the match status. Given the negative sign in this expression, smaller values of this statistic imply a better fit to the data.

TABLE 4-2  Cross-Validation Assessment of the Effects of Survey Weights

                                          Average Log-Likelihood Penalty Function
                                          Over 100 Cross-Validated Replications
Weighted/Unweighted   Type of Selection   E-sample   P-sample
Unweighted            Random selection    .2782      .3278
                      Maintain clusters   .2785      .3281
Weighted              Random selection    .2357      .2615
                      Maintain clusters   .2360      .2619

NOTE: See the note to Table 4-1 for details on the average log-likelihood penalty function.

Recently, the Census Bureau (Mule et al., 2007), motivated by nonlinear residual-type plots, substituted a spline function for the (essentially) four indicator variables for age ranges, which provided a piecewise linear and quadratic function selected to fit the observed relationships between age and both the match rate and the correct enumeration rate. This substitution would not have been available using poststratification. Initial indications are that this substitution provides only modest benefits for the overall fit of the logistic regression models, but there may be substantial advantages for estimation of specific demographic groups, particularly age groups that are not well estimated by the four-part step function for age, roughly people aged 17-21. The panel strongly supports this research.
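The age-spline idea can be illustrated with a simple basis construction: instead of four age-range indicators, age enters the model through piecewise linear (and, optionally, quadratic) terms with knots at chosen ages, which a logistic regression can then use directly. The knot locations and the truncated-power form below are our illustrative choices, not the specification in Mule et al. (2007).

```python
import numpy as np

def age_spline_basis(age, knots=(18, 30, 50), quadratic=False):
    """Truncated-power spline basis for age.

    Columns: age, (age - k)_+ for each knot k, and optionally the squared
    versions, giving a piecewise linear (or linear-plus-quadratic) function of
    age rather than a four-part step function.  Knot values are illustrative."""
    age = np.asarray(age, dtype=float)
    cols = [age] + [np.maximum(age - k, 0.0) for k in knots]
    if quadratic:
        cols += [age ** 2] + [np.maximum(age - k, 0.0) ** 2 for k in knots]
    return np.column_stack(cols)

# These columns would replace the age-range indicator variables in the
# match and correct-enumeration logistic regressions.
print(age_spline_basis([5, 17, 21, 64]).round(1))
```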

Other continuous variables that might be productive are contextual variables that are estimated at the area level, such as percent vacant, percent renter, and percent minority. Along these lines, the panel strongly suggests moving away from predictors that are identical to those used in the previous poststratification, to more fully exploit the flexibility of logistic regression. However, as mentioned in the discussion concerning the use of logistic regression models to substitute for synthetic estimation, any predictors used in these logistic regression models must be available from the census to support estimation of net census error for any domain (at least in the form currently proposed for use by the Census Bureau). This requirement restricts the available predictors to functions of the six factors used to define the A.C.E. poststratification, a few additional variables from the short form in 2010, any variables collected during census processing, and contextual variables collected at aggregate geographic levels (say, from the American Community Survey or StARS).

Schindler (2006) examined many of these possible variables to assess whether they provided substantial additional benefits as additional factors in producing poststrata (but not in a logistic regression approach). He considered the following variables: (1) geographic: census region, state, urban-rural, and mode of census data collection (mailout-mailback, list-enumerate, or list-leave); (2) contextual variables at the tract level: mail return rate and percentage minority; (3) family and housing variables: marital status, relation to the head of household, and structure code (single unit or multiunit); and (4) census operational variables: indicator of mail or enumerator return, date of return, and proxy status. This research did not identify any variables that provided substantial benefit over the 416 indicator variables from the poststratification used in A.C.E.

This analysis, while extremely important, should not be considered conclusive. For example, in the related problem of examining large numbers of subsets of a collection of possible predictors for use in regression-type models, it is difficult to know whether one has missed an effective combination of variables (or transformed versions, or interactions, etc.). This is complicated work, and it may be that further examination of potential predictors will still prove useful. The panel notes that, to assess the novel contributions of sets of covariates that are highly correlated, principal component analysis might provide useful insights.

As with all statistical models used in predictive settings, the explanatory variables used should, to the extent possible, be consistent with what is known about the factors that are related to census coverage error.
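As one way to act on the panel's suggestion above about highly correlated sets of covariates, the sketch below standardizes a block of tract-level contextual variables and inspects how much of their joint variation is captured by the leading principal components, which could then be tried as predictors. The variable names and numbers are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def contextual_pca(X, n_components=3):
    """Principal components of correlated area-level contextual covariates.
    Returns the fitted PCA and the component scores that could be tried as
    predictors in the match / correct-enumeration logistic regressions."""
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components).fit(Xs)
    return pca, pca.transform(Xs)

# Hypothetical tract-level matrix: percent renter, percent vacant,
# percent minority, and mail return rate.
X = np.array([[35, 6, 22, 78],
              [60, 9, 45, 62],
              [20, 4, 10, 85],
              [55, 12, 50, 58]], dtype=float)
pca, scores = contextual_pca(X, n_components=2)
print(pca.explained_variance_ratio_)  # share of joint variation in the leading components
```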

This information is moderately consistent with the variables currently included in the logistic regression models being examined by the Census Bureau, but the linkage between the research findings and the predictors in these models is not as direct as one would like. The logistic regression models should reflect what is known about the sources of census coverage error, to the extent that this information is represented on the short form and in available contextual information.

The panel believes that the Census Bureau's initial focus on the covariates previously used in the 2000 poststratification, rather than a broader look at the variables available, was due to the desire to determine relatively quickly whether such a technique was an improvement over poststratification and could be relied on in a production environment. This was important to determine, but the use of additional covariates is not a severe complication, and no major additional software development would be required. There may therefore have been an unnecessary focus on evaluating a model very similar to the poststratification used in 2000. We hope that there is still time to operate in a more exploratory manner prior to 2010. The six models that have garnered the majority of attention to date are too similar to each other to learn enough about what collections of predictors will work well. The Census Bureau should therefore expand the important research carried out by Schindler (2006) and apply it to the logistic regression models, attempting to identify additional useful predictors for match rate and correct enumeration rate and using cross-validation to evaluate the resulting logistic regression models.

Another issue is whether the same predictors are planned for use in both the logistic regression model for match rate and the model for correct enumeration rate in order to eliminate so-called balancing error. Balancing error here is a generalization of the situation that occurs in poststratification when an overcount on the P-sample side that would normally balance with an undercount on the E-sample side no longer balances because the P- and E-sample cases are placed in different poststrata as a result of the use of different stratification variables.

This situation occurred with the 2000 census coverage measurement program, when the poststratification used in A.C.E. revision II was modified from that used in the original A.C.E. The Census Bureau decided to include two new factors in the poststratification that were available only for the E-sample, due to their predictive power, resulting in different poststrata for the E-sample and the P-sample; the Bureau believed that the new factors would provide preferable poststrata for estimation of the probability of correct enumeration status. The new factors were a variable indicating whether the E-sample enumeration was or was not a proxy enumeration and a variable indicating the type of census return (early mail return, late mail return, early nonmail return, and late nonmail return).

While the addition of these variables to the E-sample poststrata certainly improved the partitioning of the E-sample into more homogeneous groups to reduce correlation bias, there was a concern that the difference in poststrata for the E-sample and the P-sample might cause a substantial number of failures in the balancing assumption. For example, a proxy interview often results in an enumeration with insufficient information for matching. Insufficient-information enumerations were treated in 2000 as if they were erroneous enumerations, and the P-sample enumerations that would have matched to those cases were treated as census omissions. Since the E-sample cases were proxy enumerations and therefore placed in a poststratum that did not exist for the corresponding P-sample cases for A.C.E. revision II, these errors would be unlikely to balance.

A related issue can arise in the application of logistic regression models of both the match rate and the correct enumeration rate, but it is substantially more difficult to assess. If the variables differ for these two logistic regression models, coverage rate estimates for some combinations of these variables might be biased, although it is not known whether this would cause bias for the domains (defined by geography, age, race/ethnicity, etc.) that are of interest for census estimation. It is therefore important to determine when the benefit of improved predictive power for one of the two logistic regression models would outweigh the loss from the lack of balance.

Differences in the covariates used for the two logistic regression models could arise for two reasons. First, as mentioned above, some variables are not available for both the E-sample and the P-sample, since the P-sample interview is much more detailed. Second, even when the same variables are available for both the P-sample and the E-sample, their degree of predictiveness could differ markedly between the two logistic regression models. In the second case, insisting on the use of identical predictors may sacrifice additional predictive power. Thus, if the balancing error is discovered to be relatively modest, one might still have a good reason to use different predictors in the two logistic regression models. As an example, suppose erroneous enumerations were much more frequent for those enumerated using personal follow-up interviews than for those enumerated using mailed-back questionnaires (which was not the case in 2000). If there is no reason to believe that the same distinction would hold for omissions, it would then be sensible to include a variable that measured the rate of mail return as a covariate for the logistic regression model of correct enumeration status, but not to include it as a covariate in the logistic regression model for match status.

This problem may be reduced in the 2010 census given the collection of more data on census residence, the removal of duplicates during the census, other data improvement processes, and the improved matching of KE cases (for the definition, see the section below on missing data). To assess this tradeoff, one has to evaluate the quality of the resulting dual-systems estimates; to do that, the Census Bureau may have to make use of artificial-populations analysis in which the true counts are known.

One way of satisfying this restriction of having the same predictors for both logistic regression models—but still retaining some of the predictive power of the excluded predictors—is to use predictors that are tabulated at the area level rather than at the individual level. This would permit the use of predictors that are process or composition measures for small areas, which might provide substantial additional predictive performance. Finally, it should be stressed that this balancing problem is relevant only to the estimation of net coverage error—it does not arise in modeling the frequency of components of census coverage error.

The panel supports further work in developing logistic regression models, given their promise, particularly in looking for the benefits of additional covariates (again, including transformations and interactions). We offer two possibilities for broadening the approaches under consideration. First, it is not necessary that one logistic regression model be used nationwide. Different regression coefficients and even different predictors could be used for different geographic or demographic domains. Second, logistic regression is only one of many statistical models that predict a dichotomous dependent variable. This is a discriminant analysis problem, and there are a number of more flexible methods—such as classification and regression trees, recursive partitioning, support vector machines, and modeling with flexible link functions—that have been shown to have applicability in practice. For instance, classification trees develop a tree structure of decision rules that select increasingly refined subsets of a population of interest: the subsets are identified by the joint ranges of values of the possible predictors, and at each stage the subsets are selected to best discriminate between those that match (or are correct enumerations) and the remainder. Such an approach avoids the assumption of linearity used in logistic regression modeling and is therefore more flexible. Recent work by Wang et al. (2006) supports the benefits of such an approach in forming poststrata. Even if such an approach were not used in a production capacity in 2010, new information about what types of people or addresses fail to match in the census might be discovered through use of these techniques.
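As a concrete illustration of the kind of comparison the panel has in mind, the sketch below fits both a logistic regression and a shallow classification tree to simulated match-status data and evaluates them with the same log penalty used in the cross-validation above. The data, predictors, and tuning choices are illustrative assumptions rather than a description of Census Bureau practice, and survey weights are omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Illustrative data only: columns stand in for short-form and contextual
# predictors; y stands in for P-sample match status (1 = matched). The true
# relationship includes an interaction that a linear logit cannot capture.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 6))
p_true = 1 / (1 + np.exp(-(2.0 + 0.5 * X[:, 0] - 0.8 * X[:, 1] * X[:, 2])))
y = rng.binomial(1, p_true)

def log_penalty(y_true, p_hat, eps=1e-12):
    p_hat = np.clip(p_hat, eps, 1 - eps)
    return -np.mean(y_true * np.log(p_hat) + (1 - y_true) * np.log(1 - p_hat))

train, test = slice(0, 4000), slice(4000, 5000)
logit = LogisticRegression().fit(X[train], y[train])
tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=200).fit(X[train], y[train])

for name, model in [("logistic regression", logit), ("classification tree", tree)]:
    p = model.predict_proba(X[test])[:, 1]
    print(f"{name}: penalty = {log_penalty(y[test], p):.4f}")

# The tree's leaves are interpretable subsets (joint ranges of predictor
# values), which is what makes such models useful for learning which kinds of
# people or addresses fail to match, even if they are not used in production.
```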

In 1999 the Census Bureau examined the ability of classification trees to create poststrata in comparison with the variables ultimately used in 2000. The poststrata produced did not perform any better than the ones used. Classification trees were also used to determine adjustment cells for unresolved cases using data from the 1998 dress rehearsal. In this case, the use of an additional variable, people aged 18–29 living at home, was found to provide some benefit. It is common to find that the benefits in going from logistic regression to these more flexible approaches are modest. However, the work the Census Bureau has carried out to date, though closely related, is not identical to what is being suggested. Furthermore, by examining alternative approaches, the Census Bureau will learn more about the patterns in the data at very small cost, since no additional data collection would be needed.

In addition, consistent with the new priority for coverage measurement in 2010, it is important that the Census Bureau consider directing more resources now to another group of discriminant analysis problems in addition to modeling match rate and correct enumeration rate. The search for correlates of components of census coverage error—the first step in developing a feedback loop for census improvement—will involve modeling the frequency of census omissions, erroneous enumerations, duplications, and enumerations in the wrong location (for various definitions of wrong location). Again, these are efforts to discriminate between membership in two groups using a set of predictors and can therefore be addressed through use of logistic regression or the other approaches listed above. In such a research effort, the predictors that are clearly effective in the logistic regression models for match status and correct enumeration status may not be the same predictors that are effective in modeling the components of census coverage error. We describe the various types of predictors that should be considered for use in these models in Chapter 5. The key point is that the Census Bureau needs to devote considerable effort to determining both the best general form of statistical model for the four components of coverage error (i.e., logistic regression, classification tree, etc.) and which predictors are most effective in modeling the different dependent variables. The predictors that show predictive power are then potential candidates for causal factors of census coverage errors.

MISSING DATA IN NET COVERAGE ERROR MODELS

Missing data touches many aspects of DSE. In this section, we present principles for handling missing data and discuss those principles in the context of one problem, logistic regression modeling of the match rate.

Missing data affects net coverage error models in the following areas:

• P-sample noninterviews: households in the P-sample that either could not be found at home to interview or were unwilling to cooperate;
• missing characteristics in P-sample interviews: households for which the answers to some questions were not provided;
• unresolved P-sample match status: the in-person follow-up interview was unsuccessful in determining whether or not there was an E-sample match to a P-sample individual, often due to an incomplete interview;
• unresolved P-sample residence status: the in-person follow-up interview was unsuccessful in determining where that person should have been enumerated, often due to an incomplete interview;
• unresolved E-sample enumeration status: the in-person follow-up interview was unsuccessful in determining whether a person should or should not have been enumerated in the census, often due to an incomplete interview;
• missing data for individuals not in the P-sample: missing characteristics information used as covariates in the logistic regression models; and
• missing data for component errors: assessment of duplicate status, erroneous enumeration status, or whether someone was counted in the wrong place can be missing, often due to incomplete interviews.

These missing data problems are currently addressed by some form of imputation. Four general principles of imputation are worth bearing in mind when assessing and refining current approaches (see Little and Rubin, 2002:Chapters 4–5). First, imputations of sets of missing variables should be multivariate, to preserve relationships between them. Second, one should use draws rather than means: imputing draws from a predictive distribution of missing values rather than means avoids bias for estimates of nonlinear quantities (as in CCM equations); the hot-deck imputation method is a form of a draw. Third, imputations should be conditional, to the extent possible, on all predictive covariates and auxiliary information. Fourth, standard errors should incorporate measures of imputation uncertainty, using multiple imputation or replication methods.
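The second principle can be seen in a small simulation: when the target is a nonlinear quantity, such as the share of people in a given age range, filling missing values with a mean distorts the estimate, while draws from observed donors (a hot-deck-style imputation) do not. This is a minimal sketch with simulated data; the variable and the missingness mechanism (missing completely at random) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
age = rng.normal(45, 20, size=200_000).clip(0, 100)   # simulated ages
missing = rng.random(age.size) < 0.3                   # 30 percent missing at random

share_18_29 = lambda a: ((a >= 18) & (a < 30)).mean()  # a nonlinear target quantity

observed = age[~missing]

age_mean_imp = age.copy()
age_mean_imp[missing] = observed.mean()                # mean imputation

age_draw_imp = age.copy()
age_draw_imp[missing] = rng.choice(observed, size=missing.sum())  # hot-deck draws

print("complete data   :", round(share_18_29(age), 4))
print("mean imputation :", round(share_18_29(age_mean_imp), 4))   # biased downward
print("hot-deck draws  :", round(share_18_29(age_draw_imp), 4))   # approximately unbiased
```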

From the perspective of these principles, current imputations based on the hot deck are draws, and there is some attempt to condition on covariate information, although the full range of such information is not always exploited (see below). It does not appear that imputation uncertainty is included in estimates of standard errors, and the question of whether imputations preserve associations needs some attention.

To illustrate the application of these principles, we consider the problem of missing data in the context of estimation of the match rate, since the assessment of match status plays a fundamental role in DSE. We believe that many of the ideas are also applicable to the other missing data problems mentioned above.

Currently, the Census Bureau first imputes missing characteristics for the P-sample interviews. Next, using those imputed values along with the collected P-sample values and a before-follow-up match code, a logistic regression model is used to impute match status. As in most situations involving the treatment of missing data, the properties of proposed missing data treatments need to be assessed in the context of the complete-data problem. Here, the complete-data problem is the matching of the P-file and the E-file, where the P-file consists of the data collected for the individuals represented in the P-sample, and the E-file consists of the data collected for the individuals represented in the E-sample, namely those people counted in the census at residences in the PES block clusters. Figure 4-1 depicts the matching problem.

[FIGURE 4-1  P-file and E-file matching problem: records in the two files with predictors X, auxiliary matching variables Z, and match status M, showing matches (M = 1), P-sample nonmatches (M = 0), missing match status, and missing P-file and E-file characteristics.]

In this figure, X represents variables used in the logistic regression (e.g., age, sex, race, and ownership status), Z represents auxiliary variables that are not in the logistic regression models but could be used in the matching operation (e.g., name and detailed date of birth), and match status, M, is a binary function $M(X_i^P, Z_i^P, X_i^E, Z_i^E)$ of the characteristics for individuals in the E- and P-files (using an obvious notation).

The current procedure uses logistic regression with M as the dependent variable and $X^P$ as the set of potential independent variables, augmented by the before-follow-up match code, which makes some use of $X^E$. Two complications that are ignored for the purposes of this discussion but need to be incorporated in a more complete analysis are the modeling of the probability of incorrect enumerations, which are modeled as another binary variable, and the fact that there is clustering of match status within households. (Cowan and Malec, 1986, provide one method for addressing the latter issue.)

Missing information occurs in two respects. First, some of the Xs and Zs are missing for both the E-file and the P-file records for various reasons. These missing data complicate the problem of assessing match status. Second, partly due to the missing Xs and Zs, the derived variable M is sometimes missing. The Census Bureau tends to "compartmentalize" these missing data problems—first solving the problem of missing Xs and Zs, and then addressing the problem of missing Ms—in estimating the parameters for the logistic regression model for match status.

However, it is better to conceptualize the problem as one of multivariate missing data, since fully effective imputation for missing data needs to preserve the relationships between the various sets of missing and nonmissing values for all the variables that have missing data. It also helps to specify the complete set of potential auxiliary variables and to elucidate the multivariate missing data patterns involving the dependent, independent, and auxiliary variables, because it is important to preserve associations between variables that are frequently both missing, and missing Ms are strongly related to missing Xs and Zs.

The goal here is to improve estimates of the regression coefficients of the logistic regression of M on X by the effective use of cases with incomplete data. The key issue is to determine the amount of information in the incomplete cases. If there is effectively no useful information, these cases should be dropped, since imputation does not improve the estimates. We consider separately those cases with M observed and those cases with M missing.

If M is observed, cases with components of X missing have the potential to contribute substantial information to the regression. By the conditionality principle (essentially, any inference should depend only on the observed outcomes and not on any unobserved outcomes that might have been observed), imputations of components of X should condition on match status, which is important to avoid bias, and on any auxiliary variables that would help to predict missing Xs.

If, however, M is missing, the cases will contribute little to the quality of the logistic regression coefficients unless M is imputed using auxiliary variables Z that are strongly predictive of M. If such information is available, it is important to condition on it in the imputation model. If not, imputing cases with M missing simply adds noise. Auxiliary information could, in principle, include data from the E-sample (in particular, data from the closest potential match). The Census Bureau's use of the before-follow-up match code is a good example of the implementation of this principle.

For the problem of imputation of match status, M, consider the following three possibilities for sets of conditioning information:

1. condition on $X^P_{obs,i}$: the observed predictors from the P-file in the logistic regression model for match status (this corresponds to the current procedure);
2. condition on $X^P_{obs,i}, Z^P_{obs,i}$: the observed predictors in the logistic regression model for match status and the observed auxiliary variables used for matching; or
3. condition on $X^P_{obs,i}, Z^P_{obs,i}$, and $\{(X^E_{obs,j}, Z^E_{obs,j})\}_{j \in S_i}$: the set of best matches, which adds the information in the E-file of the best matches to case $i$.

These three alternatives, which differ with respect to the information on which to base an imputation of match status, can yield very different imputations. Conditioning on more variables is preferred if the additional variables improve the prediction of M.

To demonstrate the advantages of alternative 2, with the intermediate degree of conditioning, one can imagine a number of situations involving name, which is typically used to determine whether records match but is not typically suggested for use in the logistic regression model to impute match status. Situations range from those in which the E-file and P-file records have the same name, to those with spelling inconsistencies in which the names nonetheless appear to match, to those without a name on the E-file, the P-file, or both. It is likely that the logistic regression-based probabilities of a match should be different for these various situations. Furthermore, one can exploit any unused information to check on one's assumptions.

To demonstrate the advantages of alternative 3, the most comprehensive form of conditioning in which all relevant variables are used, consider the following example of a P-sample case with race missing. Also assume that this case matches to an E-sample case that has race identified as "Asian or Pacific Islander." An imputation that does not condition on E-sample information may incorrectly impute a race other than Asian (likely white).

Even a close potential match to an E-sample case with Asian or Pacific Islander as the race increases the probability that the P-sample case has that racial characteristic. As another example, suppose one has a record with a name from the P-sample and 10 records with very similar characteristics but without names from the E-sample (actually, the census records in the P-sample blocks). It is likely that one of those 10 records matches the P-sample record—certainly more likely than in situations in which no E-sample records have similar characteristics.

The Census Bureau appears to be assessing match status, in which there is missing information for various characteristics, based on both P-file and E-file information; however, the Bureau's logistic regression model for the imputation of match status uses P-file information and only a limited amount of E-file information, through use of the before-follow-up match codes. This approach could result in too many imputed nonmatches. The distinction between the E-file and the P-file is unnecessary: both files are datasets relative to the same block cluster, so they cover groups of people that overlap substantially—that is, they are characteristics for primarily the same set of people. Although treating the imputation of match status and person characteristics as two separate problems is simpler, it violates the principle that imputations should be multivariate and appears to make inferior use of the information available. If there is some computational advantage to separating the problems in a sequential approach, one can still obtain the benefits of a full multivariate approach by conditioning as one proceeds from the first problem to the second.

A crucial assumption underlying the above is that the missing data are missing at random. Let R represent the missing data pattern that is observed, and let V = {X, Z} be the variables that are conditioned on when using an imputation method. Then the assumption is specifically that

$$ \Pr\!\left(M, X^P_{mis} \mid V, R\right) = \Pr\!\left(M, X^P_{mis} \mid V\right) \quad (M \text{ missing}), $$

and

$$ \Pr\!\left(X^P_{mis} \mid M, V, R\right) = \Pr\!\left(X^P_{mis} \mid V, M\right) \quad (M \text{ observed and conditioned on}). $$

Conceptually, the first equation says that, in imputing for logistic regression modeling, which entails representing the relationship between M and $X^P_{mis}$, conditioning on the missing data pattern adds nothing when one is already conditioning on V. The second equation says that, in imputing for missing covariates, conditioning on the missing data pattern adds nothing when one is already conditioning on M and V. These formulas illustrate one of the underlying principles, which is that by conditioning on V, the assumption of missing at random is weaker than the assumption without this conditioning. (The stronger assumption is missing completely at random, which is that the missing values have the same distribution as the observed values, without any conditioning. This assumption is extremely unlikely to obtain in either missing data problem.)

If the missing-at-random assumption is not considered reasonable, one could impute M by conditioning on various aspects of R (referred to as pattern-mixture models; for details, see Little, 1993).

It would be valuable for the Census Bureau to assess its current imputation methods in its coverage measurement models for consistency with the above principles. As noted above, the logistic regression approach for modeling match status seems too focused on the P-file data, ignoring potentially useful information both in auxiliary data used in the matching algorithm and in the E-file. It may be that, after this reconsideration, modest adjustments to the current procedures will provide a model for match status with smaller mean-squared error under a variety of realistic models for both the generation of data and missing values.

The Census Bureau's current imputation methodology, hot-deck imputation, works well in situations with limited covariate information. However, the difficulty with this approach is that dealing with more than a few covariates at a time compromises its ability to condition on all relevant variables. In contrast, parametric multiple imputation methods make better use of covariate information, and these methods can be used to estimate the contribution to variance as a result of the missing data. An example of such software is IVEWARE (see, e.g., Raghunathan et al., 2002).

Another question involves the role of imputation for missing census characteristics values. After estimating the logistic regression of M on X, imputations for missing census characteristics are needed to provide the predictors for input to the logistic regression models to estimate a match probability for these cases, through

$$ \Pr\!\left(M \mid X^E_{obs}\right) \approx \Pr\!\left(M \mid X^E_{obs}, \hat X^E_{mis}\right), $$

as input into the small-domains estimation procedure to be used in 2010. Use of hot-deck imputation here is reasonable, but an alternative is to estimate this probability directly given the observed E-sample characteristics, $\Pr(M \mid X^E_{obs})$. This approach avoids the additional uncertainty from the imputation, and it should be straightforward to employ with the move to use of logistic regression.

Finally, we note that the coverage measurement data collected in 2010, in particular the various follow-up data collections that are typically carried out, could be used to validate the imputation models used, though the sparseness of these samples may make this of only limited utility.
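The following sketch indicates what parametric multiple imputation of a partly missing covariate, combined with Rubin's rules for standard errors, might look like for a logistic regression of match status. It is not IVEWARE or Census Bureau code: the data are simulated, the imputation model is a simple normal regression that conditions on observed match status (consistent with the principle above), and the number of imputations is arbitrary.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 4000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)
m = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 0.8 * x1 - 0.7 * x2))))   # "match status"
x2_obs = x2.copy()
x2_obs[rng.random(n) < 0.4] = np.nan                                   # 40% of x2 missing

def impute_once(rng):
    """Draw one proper imputation of x2, conditioning on x1 and on M."""
    obs = ~np.isnan(x2_obs)
    Z = sm.add_constant(np.column_stack([x1[obs], m[obs]]))
    ols = sm.OLS(x2_obs[obs], Z).fit()
    beta = rng.multivariate_normal(ols.params, ols.cov_params())       # approximate
    sigma = np.sqrt(ols.scale)                                         # posterior draws
    Zmis = sm.add_constant(np.column_stack([x1[~obs], m[~obs]]))
    x2_imp = x2_obs.copy()
    x2_imp[~obs] = Zmis @ beta + rng.normal(scale=sigma, size=(~obs).sum())
    return x2_imp

M_IMP = 20
params, variances = [], []
for _ in range(M_IMP):
    X = sm.add_constant(np.column_stack([x1, impute_once(rng)]))
    fit = sm.Logit(m, X).fit(disp=0)                                   # analysis model
    params.append(fit.params)
    variances.append(fit.bse ** 2)

params, variances = np.array(params), np.array(variances)
qbar = params.mean(axis=0)                        # combined point estimates
W = variances.mean(axis=0)                        # within-imputation variance
B = params.var(axis=0, ddof=1)                    # between-imputation variance
total_se = np.sqrt(W + (1 + 1 / M_IMP) * B)       # Rubin's rules
print("coefficients:", np.round(qbar, 3), " SEs:", np.round(total_se, 3))
```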

In summary, missing data methodology needs to be viewed in the context of the complete-data problem, file matching. As noted above:

• Imputation is useful only if it adds information to the logistic regression; otherwise cases can be dropped.
• Imputations should be multivariate in order to preserve associations between missing variables.
• Imputations should condition on predictive covariates. For example, imputations should condition on M if M is observed, and imputations should condition on potential covariate information from matches or potential matches from the E-file. Some form of weighting might be developed to reflect the strength of the potential matches.

The Census Bureau could also consider parametric multiple imputation as an alternative to the hot deck because it makes better use of the covariate information and because it propagates imputation uncertainty. Finally, the Census Bureau could also consider nonignorable models, such as pattern-mixture models, if the missing-at-random assumption is likely to be violated.

This is a set of research problems to which the Census Bureau needs to allocate substantial staff resources. We believe that the benefits are likely to be considerable, and the understanding gained from the P-sample matching problem discussed in detail here should be transferable to some of the other missing data problems listed earlier in this section. The Census Bureau should identify missing data methods that are consistent with the philosophy articulated above and implement those methods in support of statistical models of census coverage measurement data in 2010.

Recommendation 7: The Census Bureau should develop missing data techniques, in collaboration with external experts if needed, that preserve associations between imputed and observed variables, condition on variables that are predictive of the missing values, and incorporate imputation uncertainty into estimates of standard errors. These ideas should be utilized in modeling the census coverage measurement data collected in the 2010 census.

Matching Cases with Minimal Information

For an E-sample enumeration to have sufficient information for matching and follow-up, as defined in the 2000 census, it needed to include a person's complete name and two other nonimputed characteristics. To be data defined in the census itself, an enumeration simply had to have two nonimputed characteristics. In the A.C.E. E-sample in 2000, 1.7 percent (4.8 million sample survey weighted) of the data-defined enumerations had insufficient information for matching and follow-up.

These cases were coded as "KE" cases in A.C.E. processing. A.C.E. estimation treated KEs as having insufficient information for matching, and they were removed from the census enumerations prior to dual-systems computations. If KEs are similar in all important respects to census enumerations with sufficient information for matching, removal from dual-systems computations slightly increases the variance of the resulting estimates, but it does not greatly affect the estimates themselves. Removal of KEs helped to avoid counting people twice, because matches for these cases are difficult to ascertain. Also, it was difficult to follow up these E-sample cases to determine their match status if they were initially not matched to the P-sample, because of the lack of information about whom to interview. However, some unknown and possibly large fraction of these cases were correct enumerations. Therefore, removing these cases from the matching inflated the estimate of erroneous enumerations, and it also inflated the estimate of the number of census omissions by about the same amount, since roughly the same number that are correct enumerations would have matched to P-sample enumerations. (There is no way of validating this assumption since the KEs generally cannot be followed up.) Given that the emphasis in 2000 was on the estimation of net census error, this inflation of the estimates of the rates of erroneous enumeration and omission was of only minor concern. However, with the new focus in 2010 on estimates of components of census coverage error, there is a greater need to find alternative methods for treating KE enumerations. One possibility that the Census Bureau has explored is whether many of these cases can be matched to the P-sample data using information from other household members.

To examine this possibility, the Census Bureau carried out an analysis using 2000 census data on 13,360 unweighted data-defined census records with insufficient information for matching to determine whether their match status could be reliably determined. (For details, see Auer, 2004; Shoemaker, 2005.) This clerical operation used name, date of birth, household composition, address, and other characteristics to match the cases to the P-sample. For the 2000 A.C.E. data, 44 percent of the KE cases examined were determined to match to a person who lived at the same address on Census Day and was not otherwise counted, with either "high confidence" or "medium confidence" (which were reasonable and objectively defined categories of credibility). For the 2000 census, this would have reclassified more than 2 million census enumerations from erroneous to correct enumerations, as well as a similar number from P-sample omissions to matches, thereby greatly reducing the estimated number of census component coverage errors. (We note that it is important in carrying out this research to remain evenhanded in evaluating whether a case does or does not match; this is not simply an effort to identify more cases that are matches.) For the remaining unresolved cases, the Census Bureau currently plans to treat them in a separate category as "enumerations unable to evaluate."
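Automating part of this clerical matching, as the panel suggests below for 2010, could begin with something as simple as a weighted similarity score over the fields the clerks used. The sketch below is purely illustrative: the field names, weights, and thresholds are assumptions, and any production rule would have to be calibrated against the clerical "high confidence" and "medium confidence" determinations.

```python
from difflib import SequenceMatcher

def field_sim(a, b):
    """String similarity in [0, 1]; missing information contributes nothing."""
    if not a or not b:
        return 0.0
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(ke_case, p_case):
    """Weighted score over name, date of birth, and household composition."""
    return (0.5 * field_sim(ke_case["name"], p_case["name"])
            + 0.3 * (1.0 if ke_case["dob"] and ke_case["dob"] == p_case["dob"] else 0.0)
            + 0.2 * field_sim(ke_case["household"], p_case["household"]))

ke_case = {"name": "J Smith", "dob": "1972-03-14", "household": "2 adults 2 children"}
candidates = [
    {"name": "John Smith", "dob": "1972-03-14", "household": "2 adults 2 children"},
    {"name": "Jane Smyth", "dob": "", "household": "1 adult"},
]
for c in sorted(candidates, key=lambda c: match_score(ke_case, c), reverse=True):
    print(round(match_score(ke_case, c), 2), c["name"])
# High-scoring pairs could be resolved automatically, with the remainder sent
# to a smaller clerical operation.
```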

The treatment of the KEs remaining after this revisiting of the definition of insufficient information for matching can be viewed as another component of "error" in the same way that a person incorrectly geocoded is an error—that is, as a problem for processing but not a part of what one would call an omission or an erroneous enumeration. Therefore, the use of the term "erroneous enumeration" for these cases is inappropriate. Cases with insufficient information should be treated as having unknown or uncertain enumeration or match status, and the term "erroneous" should be reserved for incorrect enumerations. The terminology used needs to distinguish between types of error and the uncertainty associated with these types of error for particular cases.

The panel is impressed with the findings of this research, which should substantially improve the assessment of components of census coverage error in 2010. In considering further development of the idea, it would be useful to try to find out more about any characteristics associated with KEs in order to learn how to reduce their occurrence in the first place. StARS might be useful for this purpose. Furthermore, the clerical operation used to determine the status of KEs was resource intensive, and it would be useful to try to automate some of the matching to reduce the size of this clerical operation in 2010. We anticipate that, as a result of this research, the Census Bureau will adopt a different standard of what is considered to be insufficient information for matching more generally.

DEMOGRAPHIC ANALYSIS

Demographic analysis may be facing a very dynamic period in the next few years for several reasons. First, nearly all record systems are becoming more complete, with higher quality data. Second, the American Community Survey is now providing a great deal of useful subnational information that could be used to improve and extend demographic analysis estimates. Third, StARS, a merged, unduplicated list of U.S. residents and addresses, is also a likely source of information on the number of housing units and residents at small levels of geographic aggregation that could be used to improve demographic analysis estimates.

At the same time, some things are becoming more complicated, notably the expansion of the number of race and ethnicity categories on the decennial census and the growing and increasingly mobile population of undocumented immigrants. In this context, the panel was asked to examine how demographic analysis might function more effectively as an independent assessment of the quality of the coverage of the decennial census. In addition, the panel was asked to consider the use of sex ratios from demographic analysis, especially for Hispanic residents, to reduce the effect of correlation bias in dual-systems estimation.

As described above, the basic demographic analysis equation is

$$ \hat P^{NEW}_{ij} = \hat P^{OLD}_{ij} + B_{ij} - D_{ij} + I_{ij} - E_{ij}, $$

where $\hat P^{NEW}_{ij}$ represents the current estimate of the population for demographic group $i$ and geographic area $j$, $\hat P^{OLD}_{ij}$ is the analogous estimate for a previous census, $B_{ij}$ represents the number of births between the current and a previous census, $D_{ij}$ the number of deaths, $I_{ij}$ the number of immigrants, and $E_{ij}$ the number of emigrants over the same period, all for demographic group $i$ and geographic area $j$. Once $\hat P^{NEW}_{ij}$ is computed, the net census undercount $\hat U_{ij}$ for demographic group $i$ and area $j$ is defined as $\hat U_{ij} = \hat P^{NEW}_{ij} - C_{ij}$, where $C_{ij}$ is the census count, again for demographic group $i$ and area $j$.

Error is introduced into estimates from demographic analysis by omissions in the birth and death records and by large inaccuracies in the data on immigration and emigration. The error in net undercoverage estimates from demographic analysis thus stems from error in the various components, error in the census counts, and any lack of alignment of the demographic categories.

Given these concerns, the most reliable outputs from demographic analysis are national counts by age and sex, and functions of such counts, in particular sex ratios by age; birth and death estimates; and historical patterns of various kinds. More problematic outputs are estimates by race (depending on the degree of alignment with the new race/ethnicity categories) and subnational estimates for demographic groups. The most problematic outputs are estimates of international migration components, estimates of the Hispanic population, and subnational totals for states and smaller geographic areas.

The Census Bureau's plans for demographic analysis in 2010 are to produce "estimates" and "benchmarks," with estimates represented to users as being more reliable than benchmarks. The Census Bureau will produce estimates of national-level totals by year of age and by sex, and estimates of 2000–2010 change for the above groups.
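A minimal sketch of this accounting identity, with invented numbers for a single demographic group and area, is below; it is intended only to make the equation and the resulting net undercount concrete.

```python
# Demographic analysis accounting identity for one group i and area j.
# All numbers are invented for illustration.
def da_estimate(p_old, births, deaths, immigrants, emigrants):
    """P_new = P_old + B - D + I - E."""
    return p_old + births - deaths + immigrants - emigrants

def net_undercount(p_new, census_count):
    """U = P_new - C: positive values indicate a net census undercount."""
    return p_new - census_count

p_new = da_estimate(p_old=1_200_000, births=180_000, deaths=95_000,
                    immigrants=60_000, emigrants=25_000)
print("DA estimate    :", p_new)                                        # 1,320,000
print("net undercount :", net_undercount(p_new, census_count=1_295_000))  # 25,000
```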

The Census Bureau will also produce benchmarks of national net undercount error by age, sex, and race/ethnicity. In addition, the demographic analysis program will produce sex ratios by age and race/ethnic origin, possibly for use in reducing the effects of correlation bias on estimates of net undercoverage from the census coverage measurement program.

Even without any major advances from 2000, demographic analysis will still likely play an important role in evaluation of the 2010 census. As pointed out above, demographic analysis provided an early indication that the initial estimates of the total U.S. population from A.C.E. may have been too high, and it will continue to provide an estimated count that serves as a useful estimate for many demographic groups and a useful lower bound for others.

The Census Bureau is currently pursuing important research directions, though it is unclear whether they will contribute to the 2010 demographic analysis program. Those research plans include: (1) improved estimation of international migration, (2) estimation of the uncertainty of demographic analysis estimates, and (3) progress toward the production of subnational estimates. The latter includes research on methods and data sources, with some pieces already considered of possibly acceptable quality, such as estimates of the number of people younger than 10 years of age at the state level. We believe that these are extremely important projects to pursue and that they deserve full support from the Census Bureau. In addition, the panel has the following questions concerning the 2010 demographic analysis estimates that may help orient these research avenues:

• Given that there is race/ethnicity incomparability between the decennial census and demographic analysis, which categories are going to be used in 2010?
• Given overlapping data for some cohorts (e.g., Medicare information for those over 65) in comparison with standard demographic analysis, which sources will be used, and how will that be determined? Will there be efforts to combine information?
• Estimates of Hispanic origin were produced by the censuses of 1980, 1990, and 2000, as were adjusted counts. Have these sequences been examined to determine their likely quality over time?
• In considering subnational estimates, relatively high-quality estimates are available of the number of native-born children under 10 years old at the state level, and of the number of people over 65 from Medicare, again at the state level. Given additional information on interstate migration from tax returns, school enrollment, and possibly the American Community Survey, could high-quality estimates be provided for the remaining demographic groups at the state level?

• If the Census Bureau again uses sex ratios from demographic analysis to reduce the correlation bias in adjusted population counts, should these be applied for all minority men or selectively, as in 2000?
• The American Community Survey is providing information that might be extremely useful for improving demographic analysis estimates. The possibilities include: (1) better estimates of the number of foreign-born residents, (2) better estimation of net international migration, and (3) information on sex ratios for more detailed ethnic and racial groups. How should each of these information sources be best used to improve demographic analysis, and what evaluations should be used to support decisions about implementation?
• Measurement of the size of the undocumented population is a continuing problem for demographic analysis. The current method, described in Passel (2005), is, roughly speaking, to subtract the estimated size of the legal immigrant population from the estimated size of the total foreign-born population. Are there any new methods that might be more effective in estimating the size of this population?
• StARS is already, or will soon be, of high enough quality to provide useful input into demographic analysis estimates. There are reasons to believe that administrative records could play an important role in improving various aspects of demographic analysis, especially the counting of immigration and emigration, and research in this area would be very desirable.

Demographic Analysis in Combination with Dual-Systems Estimation

As part of A.C.E. revision II, the Census Bureau decided to modify the final A.C.E. estimates based on sex ratios from demographic analysis and the assumption that the A.C.E. counts for women and children were correct. Specifically, at the level of aggregate poststrata (aggregated over nondemographic and geographic characteristics), the A.C.E. counts for black men 18 and over and for all other men 30 and over were adjusted upward so that the ratio of women to men for A.C.E. (essentially) agreed with that estimated using demographic analysis.

The argument in support of this joint use of demographic analysis and dual-systems estimation is as follows. Demographers generally believe that the most accurate outputs of demographic analysis are national-level sex ratios by age for blacks and nonblacks. Even if absolute counts are subject to some bias, sex ratios are expected to be quite accurate.

Historically, at least for adult blacks, the corresponding male-to-female ratios based on adjusted counts using dual-systems estimation have been lower than those from demographic analysis, suggesting that correlation bias (or other sources of bias) results in relative underestimation of adult males by dual-systems estimation. Because the most obvious source of correlation bias (heterogeneity of enumeration probabilities) would not have resulted in a negative bias for dual-systems estimates, the most conservative step, in terms of additional counts, is to leave estimates for the female population unchanged and to increase the male population enough so that the resulting sex ratios for the adjusted counts agree with those from demographic analysis.

It is not sufficient to simply add these additional enumerations at the level of the aggregate poststrata; they must then be allocated down to the poststrata within each of the aggregate poststrata. Bell (1993) and Bell et al. (1996) identified five different methods for doing so, but there is little evidence available as to which of the methods works best. The Census Bureau selected one of these five approaches on the basis of its best judgment, but the arbitrariness of the selection, along with the fact that the counts were sensitive to the method used, is troubling. Also, given the limitations of demographic analysis, this technique could not be applied to particular subgroups such as nonblack men aged 30 and over (especially Hispanics), despite some historical evidence that a similar correction might have improved estimates for those subgroups. Finally, adjusted counts for both adult males and females have rested on the assumption that there is no correlation bias for adult females.

Admittedly, the approach used resulted in higher "face validity" for the adjusted census counts at the aggregate level as a result of the consistency with the sex ratios from demographic analysis. However, given the issues described above, especially the lack of a formal assessment of the effect of this process on the quality of the resulting counts, the decision was controversial.

Given this situation, it seems reasonable to carry out a more comprehensive evaluation of what was done in 2000 and of possible alternatives before adopting a similar modification in 2010. (The Census Bureau currently plans to use a similar technique in 2010 as a correction for correlation bias.) Artificial population studies, in which models are developed to designate which individuals in an artificial population are and are not missed by the census, by the PES, and by the record systems used by demographic analysis, could be useful in such evaluations. We suggest that the Census Bureau include the approach described by Elliott and Little (2000) in its analysis of the method used in 2000. Their approach provides useful smoothing to the technique described in Bell (1993) and Bell et al. (1996). In addition to the beneficial smoothing, Elliott and Little's work provides estimates of precision that incorporate the uncertainty in the demographic analysis sex ratios.
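The core of the correction is a simple calculation at the level of an aggregate poststratum, sketched below with invented numbers: the female dual-systems estimate is taken as given, and the male estimate is raised so that the male-to-female ratio agrees with the demographic analysis sex ratio. The real procedure must then allocate the added enumerations down to the poststrata, the step for which Bell (1993) and Bell et al. (1996) describe the competing methods.

```python
# Sketch of the sex-ratio correction for correlation bias; numbers are
# illustrative only and do not reflect actual 2000 counts.
def correct_males_with_da_sex_ratio(dse_male, dse_female, da_sex_ratio):
    """Return an adjusted male count; no change if DSE already meets the ratio."""
    target_male = da_sex_ratio * dse_female
    return max(dse_male, target_male)

dse_female = 2_000_000
dse_male = 1_800_000            # implies a DSE sex ratio of 0.90 males per female
da_sex_ratio = 0.95             # males per female from demographic analysis

adj_male = correct_males_with_da_sex_ratio(dse_male, dse_female, da_sex_ratio)
print("additional males added:", int(adj_male - dse_male))   # 100,000
```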

In addition, the information from the American Community Survey and from StARS on various demographic statistics, such as sex ratios, could be considered for use in providing not only modifications to the counts for males, but also modifications to the counts for females, avoiding the need to rely on the assumption that no correlation bias exists for that demographic group.

Estimation of Uncertainty of Demographic Analysis

The Census Bureau (see Robinson et al., 1993) conducted initial research on developing uncertainty intervals for population forecasts, but to date these have not been fully developed. Development of such uncertainty intervals would have two benefits: users would be supplied with uncertainty intervals that have a formal probabilistic interpretation, and estimates from demographic analysis could be combined with estimates from independent sources by weighting each estimate by its precision.

In the past 15 years, a number of researchers have suggested interesting methods to consider for the development of uncertainty intervals. Poole and Raftery (2000) suggest the use of Bayesian melding for this purpose. Briefly, the idea is that one has expert knowledge about the inputs to a deterministic model and their variability (i.e., a prior distribution) and expert knowledge about the outputs of interest (the forecasts), which through exact or approximate inversion provides a second prior distribution for the inputs. These two prior distributions then have to be reconciled. There are also the most recent data collected for the inputs, and one can develop likelihoods for the previous inputs and outputs given the data. Bayes rule is then used, implemented by the sampling-importance-resampling algorithm of Rubin (1988), to update the prior distribution and produce a posterior distribution of the forecasts, which would include a posterior variance. Other approaches have also been suggested by, among others, Alho and Spencer (1997) and Lee and Tuljapurkar (1994).

Given all of this promising research, and the benefits from the development of uncertainty intervals, it would be valuable for the Census Bureau to revisit this issue and evaluate some of these approaches for their applicability to demographic analysis of the U.S. census. It is true that the U.S. census tends to have idiosyncratic challenges each decade, such as the number of undocumented immigrants that are enumerated in a given census, the number of duplicate enumerations from multiple modes of enumeration, or the degree of census undercoverage, and these challenges may be difficult to model. Therefore, in particular, the specific stochastic models suggested by Alho and Spencer (1997) and Lee and Tuljapurkar (1994) might need some modification.
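To indicate what the sampling-importance-resampling step in Bayesian melding involves, the sketch below pushes a prior on one uncertain input (net international migration) through the demographic accounting identity and reweights the resulting forecasts against an independent, noisy estimate of the output. Every distribution and number here is an illustrative assumption; a real application would treat many inputs, their priors, and the induced prior on the outputs far more carefully.

```python
import numpy as np

rng = np.random.default_rng(4)
n_draws = 100_000

# Prior on net international migration over the decade (the uncertain input).
net_migration = rng.normal(8_000_000, 1_500_000, size=n_draws)

# Deterministic accounting model: P_new = P_old + B - D + net migration.
p_old, births, deaths = 281_000_000, 41_000_000, 24_000_000
forecast = p_old + births - deaths + net_migration

# Independent (noisy) information about the output defines importance weights.
obs, obs_sd = 307_000_000, 2_000_000
log_w = -0.5 * ((forecast - obs) / obs_sd) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Resample forecasts with probability proportional to the weights (SIR).
posterior = rng.choice(forecast, size=10_000, p=w, replace=True)
print("posterior mean:", round(posterior.mean() / 1e6, 1), "million")
print("95% interval  :",
      np.round(np.percentile(posterior, [2.5, 97.5]) / 1e6, 1), "million")
```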

However, even recognizing this, the panel is confident that a research effort devoted to this issue, if started now, would very likely produce useful uncertainty intervals for the 2010 census.

In summary, demographic analysis played an important role in helping to evaluate the estimates produced by A.C.E. in 2000, and it can play an even larger role in 2010 and 2020, especially if some improvements are implemented. Those improvements include improving the measurement of undocumented and documented immigration, developing subnational geographic estimates, developing estimates of uncertainty, and further refining methods for combining demographic analysis and coverage measurement survey information.

Recommendation 8: The Census Bureau should give priority to research on improving demographic analysis in four areas: (1) improving the measurement of undocumented and documented immigrants, (2) development of subnational geographic estimates, (3) assessment of the uncertainty of estimates from demographic analysis, and (4) refining methods for combining estimates from demographic analysis and postenumeration survey data.
