Appendix E

A.C.E. Operations

This appendix describes the operations of the original 2000 Accuracy and Coverage Evaluation (A.C.E.) Program, which produced population estimates in March 2001.1 Differences from the analogous 1990 Post-Enumeration Survey (PES) are summarized in Chapter 5, which also describes the dual-systems estimation (DSE) method used to develop population estimates for poststrata from the A.C.E. results. Chapter 6 describes the differences in estimation methods used for the A.C.E. Revision II results, which were made available in March 2003. This appendix covers six topics: sampling, address listing, and housing unit match (E.1); P-sample interviewing (E.2); initial matching and targeted extended search (E.3); field follow-up and final matching (E.4); weighting and imputation (E.5); and poststrata estimation (E.6).

E.1 SAMPLING, ADDRESS LISTING, AND HOUSING UNIT MATCH

The 2000 A.C.E. process began in spring 1999 with the selection of a sample of block clusters for which an independent listing of addresses was carried out in fall 1999. The selection process was designed to balance the desired precision of the DSE estimates, not only for the total population but also for minority groups, against the cost of field operations for address listing and subsequent interviewing. In addition, the A.C.E. selection process had to work within the constraints of the design originally developed for integrated coverage measurement (ICM).

1  

See Childers and Fenstermaker (2000) and Childers (2000) for detailed documentation of A.C.E. procedures.

E.1.a First-Stage Sampling and Address Listing of Block Clusters

Over 3.7 million block clusters were formed, covering the entire United States except remote Alaska.2 Each cluster comprised one census collection block or a group of geographically contiguous blocks that were expected to be enumerated by the same procedure (e.g., mailout/mailback) and to contain, on average, about 30 housing units on the basis of housing unit counts from an early version of the 2000 Master Address File (MAF). The average cluster size was 1.9 blocks.

Next, clusters were grouped into four sampling strata: small (0–2 housing units), medium (3–79 housing units), large (80 or more housing units), and American Indian reservations (in states with sufficient numbers of American Indians living on reservations). Systematic samples of block clusters were selected from each stratum using equal probabilities, yielding about 29,000 block clusters containing about 2 million housing units, which were then visited by Census Bureau field staff to develop address lists.
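To illustrate the mechanics of equal-probability systematic selection within a stratum, here is a minimal sketch; the stratum contents, sampling interval, and random seed are hypothetical and are not the Bureau's production values.

```python
import random

def systematic_sample(frame, interval, seed=None):
    """Equal-probability systematic sample: a random start in [0, interval),
    then every interval-th unit of the ordered frame."""
    rng = random.Random(seed)
    start = rng.uniform(0, interval)
    picks = []
    position = start
    while position < len(frame):
        picks.append(frame[int(position)])
        position += interval
    return picks

# Illustrative use: a hypothetical "medium" stratum sampled at 1 in 50.
medium_stratum = [f"cluster_{i:06d}" for i in range(100_000)]
sample = systematic_sample(medium_stratum, interval=50, seed=1)
print(len(sample))  # about 100,000 / 50 = 2,000 clusters
```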

The sample at this stage was considerably larger than that needed for the A.C.E. The reason was that the Census Bureau had originally planned to field a P-sample of 750,000 housing units for use in ICM, and there was not time to develop a separate design for the planned A.C.E. size of about 300,000 housing units. So the ICM block cluster sample design was implemented first and then block clusters were subsampled for A.C.E., making use of updated information from the address listing about housing unit counts.3

2  

A.C.E. operations were also conducted in Puerto Rico; the Puerto Rico A.C.E. is not discussed here.

3  

Our panel reviewed this decision and found it satisfactory because the development of direct dual-systems estimates for states was not necessary in the A.C.E. as it would have been under the ICM design (National Research Council, 1999a, reprinted in Appendix A.4.a).

E.1.b Sample Reduction for Medium and Large Block Clusters

After completion of the address listing and an update of the MAF, the number of medium and large block clusters was reduced, using differential sampling rates within each state. Specifically, medium and large clusters classified as minority on the basis of 1990 data were oversampled to improve the precision of the DSE estimates for minority groups. Also, clusters with large differences in housing unit counts between the P-sample address list and the January 2000 version of the MAF were oversampled in order to minimize their effect on the variance of the DSE estimates.

E.1.c Sample Reduction for Small Block Clusters

The next step was to stratify small block clusters by size, based on the current version of the MAF, and sample them systematically with equal probability at a rate of 1 in 10. However, all small block clusters that were determined to have 10 or more housing units and all small block clusters on American Indian reservations, in other American Indian areas, or in list/enumerate areas were retained. After completion of the cluster subsampling operations, the A.C.E. sample totaled about 11,000 block clusters.

E.1.d Initial Housing Unit Match

The addresses on the P-sample address listing were matched with the MAF addresses in the sampled block clusters. The purpose of this match was to permit automated subsampling of housing units in large blocks for both the P-sample and the E-sample and to identify nonmatched P-sample and E-sample housing units for field follow-up to confirm their existence. Possible duplicate housing units in the P-sample or E-sample were also followed up in the field. When there were large discrepancies between the housing units on the two samples, indicative of possible geocoding errors, the block clusters were relisted for the P-sample.

E.1.e Last Step in Sampling: Reduce Housing Units in Large Block Clusters

After completion of housing unit matching and follow-up, the final step in developing the P-sample was to subsample segments of housing units on the P-sample address list in large block clusters in order to reduce the interviewing workload. The resulting P-sample contained about 301,000 housing units. Subsequently, segments of housing units in the census were similarly subsampled from large block clusters in order to reduce the E-sample follow-up workload. For cost reasons, the subsampling was done to maximize overlapping of the P-sample and E-sample. Table E.1 shows the distribution of the P-sample by sampling stratum, number of block clusters, number of housing units, and number of people.

E.2 P-SAMPLE INTERVIEWING

The goal of the A.C.E. interviewing of P-sample households was to determine who lived at each sampled address on Census Day, April 1. This procedure required that information be obtained not only about nonmovers between Census Day and the A.C.E. interview day, but also about people who had lived at the address but were no longer living there (outmovers). In addition, the P-sample interviewing ascertained the characteristics of people who were now living at the address but had not lived there on Census Day (inmovers).

The reason for including both inmovers and outmovers was to implement a procedure called PES-C, in which the P-sample match rates for movers would be estimated from the data obtained for outmovers, but these rates would then be applied to the weighted number of inmovers. The assumption was that fewer inmovers would be missed in the interviewing than outmovers, so that the number of inmovers would be a better estimate of the number of movers. PES-C differed from the procedure used in the 1990 PES (see Section 5-D.1).
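As a rough numerical illustration of the PES-C logic (the weighted counts below are invented for illustration, not A.C.E. figures), the match rate observed for outmovers is applied to the weighted inmover count to estimate the number of matched movers:

```python
# Hypothetical weighted counts for one poststratum.
outmovers_total   = 1_000.0  # OUT: weighted outmovers interviewed
outmovers_matched =   700.0  # MOUT: weighted outmovers matched to the census
inmovers_total    = 1_400.0  # IN: weighted inmovers, taken as the better count of movers

outmover_match_rate = outmovers_matched / outmovers_total   # MOUT/OUT = 0.70
matched_movers_est  = outmover_match_rate * inmovers_total  # 0.70 * 1,400 = 980

print(f"estimated matched movers: {matched_movers_est:.0f}")
```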

It was important to conduct the P-sample interviewing as soon as possible after Census Day, so as to minimize errors by respondents in reporting the composition of the household on April 1 and to be able to complete the interviewing in a timely manner. However, independence of the P-sample and E-sample could be compromised if A.C.E. interviewers were in the field at the same time as census nonresponse follow-up interviewers. An innovative solution for 2000 was to conduct the first wave of interviewing by telephone, using a computerized questionnaire.

Table E.1 Distribution of the 2000 A.C.E. P-Sample Block Clusters, Households, and People, by Sampling Stratum (unweighted)

                                                      Block Clusters        Households            People          Average
                                                                                                                  Households per
Sampling Stratum                                      Number   Percent    Number    Percent    Number   Percent   Block Cluster
Small Block Clusters (0–2 housing units)                 446       4.4      3,080       1.2      7,233      1.1        6.9
Medium Block Clusters (3–79 housing units)             5,776      57.6    146,265      56.7    386,556     57.8       25.3
Large Block Clusters (80 or more housing units)        3,466      34.6    102,286      39.6    253,730     38.0       29.5
Large and Medium Block Clusters on
  American Indian Reservations                           341       3.4      6,449       2.5     21,018      3.1       18.9
Total                                                 10,029     100.0    258,080     100.0    668,537    100.0       25.7

NOTES: Block clusters are those in the sample after all stages of sampling that contained one or more P-sample cases; households are those that contain at least one valid nonmover or inmover; people are valid nonmovers and inmovers. Outmovers are not included, nor are people who were removed from the sample.

SOURCE: Tabulations by panel staff from P-Sample Person Dual-System Estimation Output File (U.S. Census Bureau, 2001b), provided to the panel, February 16, 2001.


Units eligible for telephone interviewing were occupied households for which a captured census questionnaire (either a mail return or an enumerator-obtained return) included a telephone number and which had a city-style address and were either single-family homes or units in large multiunit structures. Units in small multiunit structures or with no house number or street name on the address were not eligible for telephone interviewing. Telephone interviewing began on April 23, 2000, and continued through June 11. Fully 29 percent of the P-sample household interviews were obtained by telephone, a higher percentage than expected.

Interviewing began in the field the week of June 18, using laptop computers. Interviewers were to ascertain who lived at the address currently and who had lived there on Census Day, April 1. The computerized interview—an innovation for 2000—was intended to reduce interviewer variance and to speed up data capture and processing by having interviewers send their completed interviews each evening over secure telephone lines to the Bureau’s main computer center, in Bowie, Maryland.

For the first three weeks, interviewers were instructed to speak only with a household resident; after that, they could obtain a proxy interview from a nonhousehold member, such as a neighbor or landlord. (Most outmover interviews were by proxy.) During the last two weeks of interviewing, the best interviewers were sent to the remaining nonrespondents to try to obtain an interview with a household member or proxy. Of all P-sample interviewing, 99 percent was completed by August 6; the remaining 1 percent of interviews were obtained by September 10 (Farber, 2001b:Table 4.1).

E.3 INITIAL MATCHING AND TARGETED EXTENDED SEARCH

After the P-sample interviews were completed, census records for households in the E-sample block clusters were drawn from the census unedited file; census enumerations in group quarters (e.g., college dormitories, nursing homes) were not part of the E-sample. Also excluded from the E-sample were people with insufficient information (IIs), as they could not be matched, and late additions to the census whose records were not available in time for matching. People with insufficient data lacked reported information for at least two characteristics (among name, age, sex, race, ethnicity, and household relationship); computer imputation routines were used to complete their census records. Census terms for these people are “non-data-defined,” “whole-person allocations,” and “substitutions”; we refer to them in this report as “whole-person imputations.” In 2000, there were 5.8 million people requiring imputation, as well as 2.4 million late additions (reinstated records) due to the special operation to reduce duplication in the MAF in summer 2000 (see Section 4-E).

For the P-sample, nonmovers and outmovers were retained in the sample for matching, as were people whose residence status was not determined. Inmovers and people clearly identified from the interview as not belonging in the sample (e.g., because they resided in group quarters on Census Day) were not matched.

E.3.a E-Sample and P-Sample Matching Within Block Cluster

Matching was initially performed by a computer algorithm, which searched within each block cluster and identified clear matches, possible matches, nonmatches, and P-sample or E-sample people lacking enough reported data for matching and follow-up. (For the A.C.E., in addition to meeting the census definition of data defined, each person had to have a complete name and at least two other characteristics). Clerical staff next reviewed possible matches and nonmatches, converting some to matches and classifying others as lacking enough reported data, as erroneous (e.g., duplicates within the P-sample or E-sample, fictitious people in the E-sample), or (when the case was unclear or unusual) as requiring higher-level review.4 The work of the clerical staff was greatly facilitated by the use of a computerized system for searching and coding (see Childers et al., 2001).
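The Bureau's computer-matching software is not reproduced here; the sketch below only illustrates the general idea of scoring a candidate P-sample/E-sample pair on agreement of name and other characteristics and bucketing it as a match, a possible match (for clerical review), or a nonmatch. The scoring weights and thresholds are invented for illustration.

```python
def agreement_score(p_rec, e_rec):
    """Crude agreement score between a P-sample and an E-sample person record."""
    score = 0.0
    if p_rec["name"] and p_rec["name"] == e_rec["name"]:
        score += 3.0
    if abs(p_rec["age"] - e_rec["age"]) <= 2:
        score += 1.0
    if p_rec["sex"] == e_rec["sex"]:
        score += 0.5
    return score

def classify_pair(p_rec, e_rec, hi=3.5, lo=2.0):
    """Bucket a candidate pair; thresholds are illustrative only."""
    s = agreement_score(p_rec, e_rec)
    if s >= hi:
        return "match"
    if s >= lo:
        return "possible match"  # sent to clerical review
    return "nonmatch"

p = {"name": "JANE DOE", "age": 34, "sex": "F"}
e = {"name": "JANE DOE", "age": 35, "sex": "F"}
print(classify_pair(p, e))  # -> "match"
```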

On the P-sample side, the clerks searched for matches within a block cluster not only with E-sample people, but also with non-E-sample census people. Such people may have been in group quarters or in enumerated housing units in the cluster that were excluded when large block clusters were subsampled.

4  

Duplicates in the E-sample were classified as erroneous enumerations; duplicate individuals in a P-sample household with other members were removed from the final P-sample; whole-household duplications in the P-sample were treated as household noninterviews.

E.3.b Targeted Extended Search

In selected block clusters, the clerks performed a targeted extended search (TES) for certain kinds of P-sample and E-sample households (see Navarro and Olson, 2001). The search looked for P-sample matches to census enumerations in the ring of blocks adjacent to the block cluster; it also looked for E-sample correct enumerations in the adjacent ring of blocks. The clerks searched only for those cases that were whole-household nonmatches in certain types of housing units. The purpose was to reduce the variance of the DSE estimates due to geocoding errors (cases in which a housing unit is coded to the wrong census block). Given geocoding errors, it is likely that additional P-sample matches and E-sample correct enumerations will be found when the search area is extended to the blocks surrounding the A.C.E.-defined block cluster.

Three kinds of clusters were included in TES with certainty: clusters for which the P-sample address list was relisted; the 5 percent of clusters with the largest numbers of census geocoding errors and P-sample address nonmatches; and the 5 percent of clusters with the largest weighted numbers of census geocoding errors and P-sample address nonmatches. Clusters were also selected at random from among the remaining clusters with P-sample housing unit nonmatches and census housing units identified as geocoding errors. About 20 percent of block clusters were included in the TES sample. Prior to matching, field work was conducted in the TES clusters to identify census housing units in the surrounding ring of blocks.
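One possible reading of these selection rules, expressed as a sketch; the field names, the way error counts and nonmatch counts are combined into a single ranking, and the sampling fraction for the non-certainty clusters are all assumptions for illustration.

```python
import random

def select_tes_clusters(clusters, noncertainty_rate=0.15, seed=7):
    """clusters: list of dicts with keys 'id', 'relisted' (bool),
    'geocoding_errors' (int), 'weighted_geocoding_errors' (float),
    'p_sample_hu_nonmatches' (int)."""
    top_n = max(1, int(0.05 * len(clusters)))  # top 5 percent cutoffs

    by_count = sorted(clusters, reverse=True,
                      key=lambda c: c["geocoding_errors"] + c["p_sample_hu_nonmatches"])
    by_weight = sorted(clusters, reverse=True,
                       key=lambda c: c["weighted_geocoding_errors"] + c["p_sample_hu_nonmatches"])

    certainty = {c["id"] for c in clusters if c["relisted"]}
    certainty |= {c["id"] for c in by_count[:top_n]}
    certainty |= {c["id"] for c in by_weight[:top_n]}

    rng = random.Random(seed)
    selected = []
    for c in clusters:
        if c["id"] in certainty:
            selected.append(c)                    # included with certainty
        elif c["p_sample_hu_nonmatches"] > 0 or c["geocoding_errors"] > 0:
            if rng.random() < noncertainty_rate:  # random selection among remaining eligible clusters
                selected.append(c)
    return selected
```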

Only some cases in TES block clusters were included in the extended clerical search. These cases were P-sample nonmatched households for which there was no match to an E-sample housing unit address and E-sample cases identified as geocoding errors. When an E-sample geocoding error case was found in an adjacent block, there was a further search to determine if it duplicated another housing unit or was a correct enumeration.

Following the clerical matching and targeted extended search, a small, highly experienced staff of technicians reviewed difficult cases and other cases for quality assurance. Then a yet smaller analyst staff reviewed the cases the technicians could not resolve.

E.4 FIELD FOLLOW-UP AND FINAL MATCHING

Matching and correct enumeration rates would be biased if there were not a further step of follow-up in the field to check certain types of cases. On the E-sample side, almost all cases that were assigned a nonmatch or unresolved code by the computer and clerical matchers were followed up, as were people at addresses that were added to the MAF subsequent to the housing unit match. The purpose of the person follow-up was to determine if these cases were correct (nonmatching) enumerations or erroneous.

On the P-sample side, about half of the cases that were assigned a nonmatch code and most cases that were assigned an unresolved code were followed up in the field. The purpose was to determine whether they were residents on Census Day and whether they were genuine nonmatches. Specifically, P-sample nonmatches were followed up when they occurred in: a partially matched household; a whole household that did not match a census address and for which the interview was conducted with a proxy respondent; a whole household that matched an address with no census person records and for which the interview was conducted with a proxy; or a whole household that did not match the people in the E-sample for that household. In addition, P-sample whole-household nonmatches were followed up when an analyst recommended follow-up; when the cluster had a high rate of P-sample person nonmatches (greater than 45 percent); when the original interviewer had changed the address for the household; and when the cluster was not included in the initial housing unit match (e.g., list/enumerate clusters, relisted clusters).

The field follow-up interviews were conducted with a paper questionnaire, and interviewers were instructed to try even harder than in the original interview to speak with a household member. After field follow-up, each P-sample and E-sample case was assigned a final match and residence status code by clerks and, in some cases, technicians or analysts.


E.5 WEIGHTING AND IMPUTATION

The last steps prior to estimation were to:5

  • weight the P-sample and E-sample cases to reflect their probabilities of selection;

  • adjust the P-sample weights for household noninterviews;

  • impute values for missing characteristics of P-sample persons that were needed to define poststrata (e.g., age, sex, race); and

  • impute residence and/or match status to unresolved P-sample cases and impute enumeration status to unresolved E-sample cases.

Weighting is necessary to account for different probabilities of selection at various stages of sampling. Applying a weight adjustment to account for household noninterviews is standard survey procedure, as is imputation for individual characteristics. The assumption is that weighting and imputation procedures for missing data reduce the variance of the estimates, compared with estimates that do not include cases with missing data, and that such procedures may also reduce bias, or at least not increase it.

For the P-sample weighting, an initial weight was constructed for each housing unit that took account of the probabilities of selection at each phase of sampling. Then weighting adjustments were performed to account for household noninterviews: one for households occupied as of the interview day and the other for households occupied as of Census Day. The adjusted interview day weight was used for inmovers; the adjusted Census Day weight, with a further adjustment for the targeted extended search sampling, was used for nonmovers and outmovers. E-sample weighting was similar but did not require a household noninterview adjustment. Table E.2 shows the distribution of P-sample and E-sample weights.6
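The household noninterview adjustment is a standard cell-based reweighting. The sketch below shows the general form, assuming a simple data structure and hypothetical adjustment cells; the A.C.E. used its own cell definitions and separate Census Day and interview day adjustments.

```python
from collections import defaultdict

def noninterview_adjust(households, cell_key):
    """Scale the weights of interviewed households so that, within each adjustment
    cell, they carry the full weight of all eligible (occupied) households.

    households: list of dicts with 'weight' (float) and 'interviewed' (bool).
    cell_key:   function mapping a household to its adjustment cell."""
    eligible = defaultdict(float)
    interviewed = defaultdict(float)
    for h in households:
        cell = cell_key(h)
        eligible[cell] += h["weight"]
        if h["interviewed"]:
            interviewed[cell] += h["weight"]

    adjusted = []
    for h in households:
        if not h["interviewed"]:
            continue  # noninterviews drop out; their weight is redistributed
        factor = eligible[cell_key(h)] / interviewed[cell_key(h)]
        adjusted.append({**h, "adj_weight": h["weight"] * factor})
    return adjusted

# Illustrative use with a hypothetical cell defined by block cluster and housing type.
hh = [
    {"id": 1, "cluster": "A", "type": "single", "weight": 300.0, "interviewed": True},
    {"id": 2, "cluster": "A", "type": "single", "weight": 300.0, "interviewed": False},
]
out = noninterview_adjust(hh, cell_key=lambda h: (h["cluster"], h["type"]))
print(out[0]["adj_weight"])  # 600.0: the interviewed household absorbs the noninterview's weight
```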

Item imputation was performed separately for each missing characteristic on a P-sample record.

5  

Cantwell et al. (2001) provide details of the noninterview adjustment and imputation procedures used.

6  

The weights were trimmed for one outlier block cluster.


Table E.2 Distribution of Initial, Intermediate, and Final Weights, 2000 A.C.E. P-Sample and E-Sample

                                       Number of               Percentile of Weight Distribution
Sample and Mover Status                Non-Zeros     0    1    5   10   25   50   75   90   95   99    100

P-Sample
  Initial Weight^a
    Total                               721,734      9   21   48   75  249  352  574  647  654  661  1,288
    Nonmovers                           631,914      9   21   48   76  253  366  575  647  654  661  1,288
    Outmovers                            24,158      9   21   48   69  226  348  541  647  654  661  1,288
    Inmovers                             36,623      9   21   47   67  212  343  530  647  654  661  1,288
  Intermediate Weight^b
    Total with Census Day Weight        712,442      9   22   49   78  253  379  577  654  674  733  1,619
    Total with Interview Day Weight     721,426      9   21   48   76  249  366  576  651  660  705  1,701
  Final Weight^c
    Census Day Weight
      Total                             640,795      9   22   50   83  273  382  581  654  678  765  5,858
      Nonmovers                         617,390      9   22   50   83  274  382  581  654  678  762  5,858
      Outmovers                          23,405      9   23   50   77  240  363  577  655  682  798  3,847
    Inmovers                             36,623      9   21   47   67  214  345  530  651  656  705  1,288

E-Sample
  Initial Weight^d                      712,900      9   21   39   55  212  349  564  647  654  661  2,801
  Final Weight^e                        704,602      9   21   39   56  217  349  567  647  654  700  4,009

a P-sample initial weight, PWGHT, reflects sampling through large block subsampling; total includes removed cases.

b P-sample intermediate weight, NIWGT, reflects the household noninterview adjustment for Census Day; NIWGTI reflects the household noninterview adjustment for A.C.E. interview day.

c P-sample final weight, TESFINWT, applies to confirmed Census Day residents, including nonmovers and outmovers (reflects targeted extended search sampling); NIWGTI applies to inmovers.

d E-sample initial weight, EWGHT, reflects sampling through large block subsampling.

e E-sample final weight, TESFINWT, reflects targeted extended search sampling.

SOURCE: Tabulations by panel staff from P-Sample and E-Sample Person Dual-System Estimation Output Files (U.S. Census Bureau, 2001b), provided to the panel, February 16, 2001.


The census editing and imputation process provided imputations for missing basic (complete-count) characteristics on the E-sample records (see Appendix G). Finally, probabilities of being a Census Day resident and of matching the census were assigned to P-sample people with unresolved status, and probabilities of being a correct enumeration were assigned to E-sample people with unresolved enumeration status.

E.6 POSTSTRATA ESTIMATION

Estimation of the DSE for poststrata and of the variance associated with the estimates was the final step in the A.C.E. process. The poststrata were specified in advance on the basis of research with 1990 census data (see Griffin and Haines, 2000), and each E-sample and P-sample record was assigned to a poststratum as applicable. Poststrata that had fewer than 100 cases of nonmovers and outmovers were combined with other poststrata for estimation. In all, the originally defined 448 poststrata, consisting of 64 groups defined by race/ethnicity, housing tenure, and other characteristics cross-classified by seven age/sex groups (see Table E.3), were reduced to 416 by combining age/sex groups as needed within one of the 64 groups.
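A sketch of the collapsing step for one of the 64 major groups, assuming each age/sex category carries its unweighted count of nonmover and outmover cases; the order in which sparse categories are merged is an assumption for illustration, not the Bureau's actual combination rules.

```python
def collapse_age_sex(case_counts, minimum=100):
    """case_counts: dict mapping age/sex category -> nonmover + outmover cases
    within one major group. Returns lists of categories pooled for estimation."""
    groups = [[cat] for cat in case_counts]            # start with one group per category
    total = lambda g: sum(case_counts[c] for c in g)

    # Repeatedly merge the smallest group into an adjacent one until all meet the minimum.
    while len(groups) > 1 and min(total(g) for g in groups) < minimum:
        i = min(range(len(groups)), key=lambda k: total(groups[k]))
        j = i - 1 if i > 0 else i + 1
        groups[j].extend(groups.pop(i))
    return groups

counts = {"under 18": 450, "men 18-29": 60, "women 18-29": 80,
          "men 30-49": 300, "women 30-49": 320, "men 50+": 250, "women 50+": 270}
print(collapse_age_sex(counts))
# The sparse 18-29 categories are pooled with the under-18 category in this example,
# leaving 5 collapsed poststrata for this major group.
```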

Weighted estimates were prepared for each of the 416 poststrata for the following:

  • P-sample total nonmover cases (NON), total outmover cases (OUT), and total inmover cases (IN) (including multiplication of the weights for nonmovers and outmovers by residence status probability, which was 1 for known Census Day residents and 0 for confirmed nonresidents);

  • P-sample matched nonmover cases (MNON) and matched outmover cases (MOUT) (including multiplication of the weights by match status probability, which was 1 for known matches and 0 for confirmed nonmatches);

  • E-sample total cases (E); and

  • E-sample correct enumeration cases (CE) (including multiplication of the weights by correct enumeration status probability).


Also tabulated for each poststratum were the census count (C) and the count of IIs (people with insufficient information, including people requiring imputation and late additions). The DSE for each poststratum was calculated as the census count minus IIs, times the correct enumeration rate, times the inverse of the match rate:

DSE = (C − II) × (CE/E) × 1/(M/P).

The match rate (M/P) was calculated for most poststrata by applying the outmover match rate (MOUT/OUT) to the weighted number of inmovers (IN) to obtain an estimate of matched inmovers, MIN = (MOUT/OUT) × IN, and then solving for

M/P = (MNON + MIN) / (NON + IN).

However, for poststrata with fewer than 10 outmovers (63 of the 416), the match rate was calculated directly from the nonmover and outmover data as

M/P = (MNON + MOUT) / (NON + OUT).
Procedures were implemented to estimate the variance in the DSE estimates for poststrata. Direct variance estimates were developed for the collapsed poststrata DSEs that took account of the error due to sampling variability from the initial listing sample, the A.C.E. reduction and small block subsampling, and the targeted extended search sampling. The variance estimates also took account of the variability from imputation of correct enumeration, match, and residence probabilities for unresolved cases. Not included in the variance estimation were the effects of nonsampling errors, other than the error introduced by the imputation models. In particular, there was no allowance for synthetic or model error; the variance calculations assume that the probabilities of being included in the census are uniform across all areas in a poststratum (see Starsinic et al., 2001).


Table E.3 Poststrata in the Original 2000 A.C.E., 64 Major Groups

Race/Ethnicity Domain and Other Characteristics

1. American Indian or Alaska Native on Reservation^a: 2 groups (owner, renter)

2. American Indian or Alaska Native off Reservation^b: 2 groups (owner, renter)

3. Hispanic^c: 4 groups for owners, defined by high and low mail return rate crossed with type of metropolitan statistical area (MSA) and enumeration area (large and medium-size MSA mailout/mailback areas; all other); 4 groups for renters (same categories as for owners)

4. Non-Hispanic Black^d: 4 groups for owners (same categories as for Hispanic owners); 4 groups for renters (same categories as for Hispanic owners)

5. Native Hawaiian or Other Pacific Islander^e: 2 groups (owner, renter)

6. Non-Hispanic Asian^f: 2 groups (owner, renter)

7. Non-Hispanic White or Some Other Race^g: 32 groups for owners, defined by high and low mail return rate crossed with region (Northeast, Midwest, South, West) and with type of metropolitan statistical area and enumeration area (large MSA mailout/mailback areas; medium MSA mailout/mailback areas; small MSA and non-MSA mailout/mailback areas; other types of enumeration area, e.g., update/leave); 8 groups for renters, defined by high and low mail return rate crossed with type of metropolitan statistical area and enumeration area (same four categories as for owners)

All 64 groups were classified by seven age/sex categories (below) to form 448 poststrata; in estimation, some age/sex categories were combined (always within one of the 64 groups) to form 416 poststrata.

  Under age 18
  Men ages 18–29; women ages 18–29
  Men ages 30–49; women ages 30–49
  Men age 50 and older; women age 50 and older


NOTES: Large metropolitan statistical areas (MSAs) are the largest 10 MSAs in the United States; medium MSAs are other MSAs with 500,000 or more population; small MSAs are MSAs with less than 500,000 population.

The description of race/ethnicity domains is simplified somewhat; see Haines (2000) for the complete set of classification rules (see also Farber, 2001a).

a All people on a reservation with American Indian or Alaska Native as their single or one of multiple races.

b All people in Indian Country not on a reservation with American Indian or Alaska Native as their single or one of multiple races; all non-Hispanic people not in Indian Country with American Indian or Alaska Native as their single race.

c All Hispanic people in Indian Country not already classified in Domain 2; all Hispanic people not in Indian Country except those living in Hawaii with Native Hawaiian or Other Pacific Islander as their single or one of multiple races.

d All non-Hispanic people with Black as their only race; all non-Hispanic people with Black and American Indian or Alaska Native race not in Indian Country; all non-Hispanic people with Black and another single race group, except those living in Hawaii with Black and Native Hawaiian or Other Pacific Islander race.

e All non-Hispanic people with Native Hawaiian or Other Pacific Islander as their only race; all non-Hispanic people with Native Hawaiian or Other Pacific Islander and American Indian or Alaska Native race not in Indian Country; all non-Hispanic people with Native Hawaiian or Other Pacific Islander and Asian race; all people in Hawaii with Native Hawaiian or Other Pacific Islander as their single or one of multiple races.

f All non-Hispanic people with Asian as their only race; all non-Hispanic people with Asian and American Indian or Alaska Native race not in Indian Country.

g All non-Hispanic people with White or some other race as their only race; all non-Hispanic people with White or some other race in combination with American Indian or Alaska Native not in Indian Country; or in combination with Asian; or in combination with Native Hawaiian or Other Pacific Islander not in Hawaii; all non-Hispanic people with three or more races (excluding American Indian or Alaska Native) in Indian Country or outside of Indian Country (excluding Native Hawaiian or Other Pacific Islander in Hawaii).
