The 2000 Census: Counting Under Adversity (2004)

CHAPTER 9
Management and Research

In this chapter, we address two elements that are critical to a successful census: the organizational and management structure within the Census Bureau for taking the census, together with the mechanisms for heeding internal and external advice (9-A), and the research, testing, and experimentation program that is required for evaluating one census and planning the next (9-B).

9–A ORGANIZATION AND MANAGEMENT STRUCTURE

9–A.1 2000 Census Organization

The 2000 census organizational structure was similar in broad outline to the 1990 structure (see Thompson, 2000). Of the Census Bureau’s eight directorates in 2000, each headed by an associate director, seven were involved in the census: decennial census, field operations, communications, information technology, demographic programs, methodology and standards, and finance and administration. Five positions were key: the director, the deputy director, the associate and assistant directors for decennial census, and the associate director for field operations.

The director, a presidential appointee, principally handled relations with Congress, the Commerce Department, and other key stakeholders. The deputy director helped make planning and operational decisions when there was disagreement among other managers; from January 2001 to June 2002, the deputy director also served as acting director during the change of presidential administrations. The associate director for decennial census oversaw all decisions for planning, budget, and operations. Under the associate director, the assistant director for decennial census served as the 2000 census chief operating officer and had direct line responsibility for the Decennial Management Division, the Geography Division, the Decennial Statistical Studies Division (which implemented and evaluated the 2000 Accuracy and Coverage Evaluation Program, or A.C.E.), and the Decennial Systems and Contracts Management Office, which housed the programmers responsible for data processing and supervised contractors for three of the four data capture centers and other contractor operations.

The associate director for field operations oversaw data collection by the Field Division, which at the time of the 2000 census encompassed 12 permanent regional offices, 12 regional census centers, the Puerto Rico area office, 520 temporary local census offices (and 6 offices in Puerto Rico), the National Processing Center in Jeffersonville, Indiana, and the Technologies Management Office (which develops computer-assisted data collection questionnaires and procedures). The Field Division also employed and directed the massive temporary corps of enumerators.

Other Census Bureau divisions involved in the 2000 census included the Population Division and the Housing and Household Economic Statistics Division in the demographic programs directorate and the Planning, Research, and Evaluation Division (PRED) in the methodology and standards directorate.

The Bureau developed a variety of mechanisms for internal coordination. An executive-level steering committee for the census, meeting every 2 weeks, was formed in 1997. It included the director, the deputy director, the two principal associate directors, the associate and assistant directors for decennial census and field operations, the associate director for communications, and the congressional affairs office chief. In 2000 the Executive Steering Committee for A.C.E. Policy (ESCAP) was formed to review the plans for the A.C.E., monitor A.C.E. implementation, review A.C.E. evaluation studies, and make recommendations to the director or acting director about whether or not to adjust the census counts for an expected measured net undercount (see Section 5-D.2). Staff teams cutting across directorates were formed as needed for specific tasks.

For external oversight, the Bureau answered to an unprecedented array of groups (see Section 3-C). They included the Congressional Monitoring Board (presidential and congressional appointees), the U.S. House Subcommittee on the Census of the Committee on Government Reform (and successor subcommittees charged with census oversight), the Department of Commerce Inspector General, and the U.S. General Accounting Office. External input was sought from our Panel to Review the 2000 Census, the Panel on Research on Future Census Methods, the Secretary of Commerce’s 2000 Census Advisory Committee, the Census Bureau’s Professional Associations Advisory Committee, and advisory committees for minority groups.

9–A.2 Assessment

The basic structure of the Census Bureau—directorates comprising divisions, each largely consisting of staff with particular expertise—is typical of large data collection organizations. It is also not surprising that almost every directorate in the Census Bureau would contribute staff and expertise to the decennial census, the centerpiece of the Bureau’s activity. However, the panel’s observations, discussions with key Census Bureau staff, and panel members’ own administrative experiences suggest that the census organization was not as effective as it could have been in ensuring smooth-running, high-quality operations and data.

Three major problems stand out in the panel’s view. First, there was no project director for the census with authority over the census-related work of all staff involved in census operations. It is common in data collection organizations (e.g., Statistics Canada, private survey firms) to use a matrix approach in which staff from expertise-based divisions are assembled as a team to work on a particular project under the direct authority of a project leader (or similar title). The associate director for decennial census came closest to such a person for the 2000 census, but he did not have direct authority over some staffs, such as those in the field and demographic programs directorates.


Second, across and within directorates, divisions tended to operate as individual fiefdoms. For example, the Population Division developed the address list for group quarters with little coordination with the Geography Division staff that had responsibility for compiling the Master Address File (MAF). The Geography Division staff tended to focus on assigning structure addresses to the correct geographic areas; they were less sensitive to the varieties of housing unit types across the nation (e.g., single-family houses carved up into apartments with a common mailing address) that required careful handling to avoid duplication or omission. As another example, the PRED Division, which was charged to evaluate census operations, was deliberately kept independent from the decennial directorate. However, this separation was carried to such an extreme that PRED staff and staff of other evaluation units (including the A.C.E. staff and subject-matter specialists in the demographic programs directorate) were often not adequately apprised of or able to inform each other’s efforts. Moreover, operational staff were often not cognizant of or sensitive to the needs of evaluation staff for accessible data files and other information. (See also Morganstein et al., 2003, who make a similar point with regard to the lack of priority attached to quality assurance during data collection.)

For many tasks, project groups were formed from staffs of multiple divisions; however, such groups tended to operate not as project teams but as committees, in which each member’s allegiance and lines of communication were primarily with the home division. Consequently, responsibilities were often not clearly defined, and interactions were often highly bureaucratic. For other tasks, needed coordination and feedback across divisions were not obtained on a timely basis. For example, the PRED staff apparently had no input into the design of the MAF database structure; the coding schemes used in the MAF made it difficult to identify the sources that contributed to each address and led to delays in completing crucial assessments. As another example, contractors for the data capture centers had no input into questionnaire design and printing, which put the data capture operation at risk (see Titan Corporation, 2003).

Symptomatic of and contributing to problems in coordination was that standards for documentation of data files and operations and for specification of key variables (e.g., flags for types of imputations) were not uniform across divisions or, in some cases, across branches within a division. Consequently, the quality of available documentation varies widely, as does the usability of data files that are critical for quality control and evaluation. Generally, the A.C.E. Program and other operations specified by the Decennial Statistical Studies Division (e.g., long-form sample weighting) followed good practices for documentation, and the A.C.E. files are easy to use for many applications. However, other data files were not designed to facilitate evaluation; these include the MAF, management operations files, and data processing files. Also, important operations such as the system for processing complete-count data and imputing for item nonresponse in the long-form sample were not well documented, which hampers both internal and external evaluation.

Third, communication with and involvement of outside resource people and stakeholders was far from optimal. In particular, communication channels among local governments, geographers located in the Census Bureau’s regional offices, and headquarters Geography Division staff were often muddied. There was too little feedback from headquarters to the regions and localities about schedule changes and other matters, which made it difficult for localities to participate effectively in local review. There was also too little involvement of localities in such operations as determining sensible “blue lines” (the map lines demarcating areas for mailout/mailback procedures from update/leave procedures). Case studies by participants in the Local Update of Census Addresses (LUCA) Program document many instances of inappropriate blue-line designations, which not only complicated local review but also created logistical problems for delivery of census questionnaires (see Working Group on LUCA, 2001:Chap. 4).

We could not, and did not, undertake a comprehensive management review of the 2000 census, so our critique of the census organizational structure is deliberately limited in nature. However, we believe that the problems we have identified affected important aspects of census operations and impeded the ability to conduct imaginative, timely, and useful evaluations of those operations (see also IBM Business Consulting Services, 2003; Morganstein et al., 2003).

Finding 9.1: From the panel’s observations and discussion with key Census Bureau staff, it appears that the decentralized and diffuse organizational structure for the 2000 census impeded some aspects of census planning, execution, and evaluation. There was no single operational officer (below the level of director or deputy director of the Bureau) clearly in charge of all aspects of the census; the structure for decision-making and coordination across units was largely hierarchical; and important perspectives inside the Bureau and from regional offices, local partners, and contractors were not always taken into account. These aspects of the 2000 management structure affected two areas in particular: (1) development of the Master Address File (MAF), which experienced numerous problems, and (2) the program to evaluate census processes and data quality, from which results were slow to appear and are often of limited use for understanding the quality of the 2000 census or for planning the 2010 census.

Finding 9.2: The quality of documentation and the usability of internal 2000 census data files and specifications that are important for evaluation vary widely. Generally, the A.C.E. Program followed good practices for documentation, and the A.C.E. files are easy to use for many applications. However, the lack of well-documented and usable data files and specifications hampered timely evaluation of other important aspects of the census, such as the sources contributing to the Master Address File and the implementation of imputation routines.

9–B EVALUATION PROGRAM

9–B.1 Completing 2000 Census Evaluations

The Census Bureau developed an ambitious evaluation program for the 2000 census. The evaluations that were conducted to assess coverage error in the census and to evaluate the two major coverage measurement programs—A.C.E. and demographic analysis—were generally of high quality, informative, and completed on a timely schedule (see discussion in Chapter 6).


A large number of evaluations of the census itself were planned to cover virtually every aspect of the data and operations (see listing in Appendix I). Some projects were subsequently cancelled or combined with other projects because of budget limitations or because they were superseded by an A.C.E.-related evaluation. The list of completed evaluations, however, is still sizable. These evaluations began to appear only in summer 2003—late for informing users and 2010 data planners (see Appendix I).[1] We conjecture that many evaluations were hampered by the need to devote additional resources to the various A.C.E. evaluations and by the fact that many key data files were not well designed for evaluation purposes. Consequently, the evaluation staff had to devote substantial time to obtaining usable data, leaving relatively little time for analysis.

We applaud the effort the Census Bureau devoted to evaluation of the 2000 census. Yet we must note the serious deficiencies of many (but by no means all) of the evaluation studies released to date. Too often, the evaluations do not clearly answer the needs of relevant audiences, which include 2000 census data users who are interested in data quality issues that bear on their analyses and Census Bureau staff and others who are concerned with the lessons from the 2000 experience that can inform 2010 planning. No single evaluation will necessarily speak to both audiences, but every evaluation should clearly speak to at least one of them.

However, many of the completed evaluations are accounting-type documents rather than full-fledged evaluations. They provide authoritative information on such aspects as number of mail returns by day, complete-count item nonresponse and imputation rates by type of form and data collection mode, and enumerations completed in various types of special operations (e.g., urban update/leave, list/enumerate). This information is valuable but limited. Many reports have no analysis as such, other than simple one-way and two-way tabulations. Reports sometimes use different definitions of variables, such as type of form (self or enumerator), and obtain data from files at different stages of processing, with little attempt to reconcile such differences. Almost no reports provide tables or other analyses that look at operations and data quality for geographic areas.

[1] We thank the Census Bureau for providing advance copies of evaluation reports to us (and also to the U.S. General Accounting Office, the Department of Commerce inspector general, and relevant congressional offices).

Such limited operational evaluations are of little benefit to users, who need more context than is provided in many of the evaluation reports and who need additional meaningful detail, such as variations in item imputation rates among population groups and geographic areas and patterns of nonresponse for more than one variable considered at a time. (Census Bureau summary files provide item imputation rates for geographic areas, but these rates are for the total applicable universe for a single variable. Many of the rates combine the household and group quarters populations.)
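
To make the point concrete, the sketch below (ours, not the Bureau's; the file and column names are hypothetical) shows the kind of detail users would need but cannot get from the summary files: item imputation rates broken out jointly by population group and geography, and nonresponse patterns for two items considered together.

```python
# Illustrative sketch only; "census_person_extract.csv" and its columns are
# hypothetical stand-ins for an internal person-level evaluation file.
import pandas as pd

persons = pd.read_csv("census_person_extract.csv")

# Imputation rate for one item, jointly by race group and state -- the
# two-way detail that single-variable summary-file rates do not provide.
age_imp = (persons.groupby(["race_group", "state"])["age_imputed"]
                  .mean()
                  .rename("age_imputation_rate"))
print(age_imp.head())

# Joint pattern: how often were age AND tenure both imputed for the same
# record, versus one or neither?
joint = pd.crosstab(persons["age_imputed"], persons["tenure_imputed"],
                    normalize="all")
print(joint.round(4))
```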

Similarly, 2010 planners need analysis that is explicitly designed to answer important questions for research and testing to improve the 2010 census. One useful analysis, for example, might focus on factors that explain variations in mail response rates, cross-sectionally and in comparison with 1990 rates, given how critical mail response is to the timeliness of census operations and the quality of census data.
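
As an illustration of such an analysis, the sketch below (again with hypothetical file and variable names) regresses tract-level 2000 mail return rates on the 1990 rates and on tract characteristics, then flags the tracts whose 2000 rates fell furthest below prediction as candidates for 2010 research and testing.

```python
# Illustrative sketch; tract file and covariates are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

tracts = pd.read_csv("tract_mail_returns.csv")

model = smf.ols(
    "return_rate_2000 ~ return_rate_1990 + pct_renters + pct_poverty"
    " + pct_multiunit + C(region)",
    data=tracts,
).fit()
print(model.summary())

# Tracts far below the rate predicted by 1990 performance and covariates
# suggest where mail response deteriorated for reasons worth investigating.
tracts["shortfall"] = model.resid
print(tracts.nsmallest(10, "shortfall")[["tract_id", "shortfall"]])
```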

Imaginative data analysis techniques—using multiple regression, exploratory data analysis, and graphical analysis—could yield important findings as well as facilitate effective presentation of results. For example, a graphical analysis showing geographic areas with particularly high or low whole-household imputation rates compared with 1990 could vividly summarize a wealth of data in a manner that would be helpful to users and suggest hypotheses for future research and testing as part of the 2010 planning process.
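
A minimal sketch of that graphical comparison, assuming a hypothetical county-level file of whole-household imputation rates for both censuses, follows; counties far above the 45-degree line are the ones worth flagging for further study.

```python
# Illustrative sketch; the county file and its columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

counties = pd.read_csv("county_imputation_rates.csv")

fig, ax = plt.subplots()
ax.scatter(counties["hh_imp_rate_1990"], counties["hh_imp_rate_2000"], s=8)
lim = max(counties["hh_imp_rate_1990"].max(),
          counties["hh_imp_rate_2000"].max())
ax.plot([0, lim], [0, lim], linestyle="--")  # 45-degree reference line
ax.set_xlabel("1990 whole-household imputation rate")
ax.set_ylabel("2000 whole-household imputation rate")

# Annotate the counties with the largest increases relative to 1990.
counties["change"] = (counties["hh_imp_rate_2000"]
                      - counties["hh_imp_rate_1990"])
for _, row in counties.nlargest(15, "change").iterrows():
    ax.annotate(row["county_fips"],
                (row["hh_imp_rate_1990"], row["hh_imp_rate_2000"]),
                fontsize=6)
plt.show()
```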

Topic reports from the evaluation program that summarize individual evaluations (see Appendix I) also vary in usefulness. Some of them provide a well-rounded picture of broad aspects of census operations, such as data capture and outreach programs, but others do not add much beyond the individual evaluations. Both the individual evaluations and topic reports tend to focus on specific slices of census operations (e.g., data capture, data processing). There are almost no studies that focus on an outcome of interest, such as the higher rates of whole-household and whole-person imputations in 2000 compared with 1990, or seek to describe patterns among geographic areas and population groups that would be important for users to understand, or seek to analyze explanatory factors and their importance for 2010 research and testing.


It is not too late for more imaginative and focused analysis to be conducted to benefit data users and the 2010 census planning process. In fact, several additional data sources exist that could support rich analyses of many important topics. At the urging of several panels of the Committee on National Statistics (see, e.g., National Research Council, 1988, 2000a), the Census Bureau for the first time is developing a Master Trace Sample of information from all of the various 2000 census operations for a sample of addresses (processes related specifically to long-form items are not included—see Hill and Machowski, 2003). Once fully completed, this file will support evaluations of the effects of specific 2000 operations on enumerations, and it should also enable 2010 planners to query the sample as a means of simulating the likely effects of various procedures being considered for 2010. For example, the Master Trace Sample might be used to evaluate the relationship of the number of enumerator visits to the final status of an enumeration (e.g., household response, proxy response, occupancy status not known), which, in turn, could inform decisions about the optimal number of callbacks to require in nonresponse follow-up in 2010. Developing this database—and working with the research community in order to best use it—should remain a top priority.
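
Once a Master Trace Sample extract is available, the visit-to-outcome question could be addressed with a query as simple as the following sketch (the extract layout shown is hypothetical, with one row per sampled address).

```python
# Illustrative sketch; "master_trace_sample.csv" is a hypothetical extract.
import pandas as pd

trace = pd.read_csv("master_trace_sample.csv")

# Final enumeration status by number of nonresponse follow-up visits: does
# the marginal yield of a household (rather than proxy) response flatten
# after some number of callbacks?
status_by_visits = pd.crosstab(
    trace["num_visits"],
    trace["final_status"],  # e.g., household, proxy, status unknown
    normalize="index",
)
print(status_by_visits.round(3))
```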

Other useful 2000 databases for analysis include the A.C.E. P-sample and E-sample files; the Person Duplication Study files of duplicate enumerations of the census and the E-sample; extracts from the Master Address File; an exact match of census records with the March 2000 Current Population Survey (CPS); and the 2000 Census Public Use Microdata Samples (PUMS) files. Each of these data sets could support important analyses. For example, the Person Duplication Study files could be analyzed to answer such questions as: What were the different types of duplicate enumerations in 2000? Among which population groups and geographic areas were they most likely to occur? What do the results suggest for 2010 research and testing and, specifically, for plans to eliminate duplicates during the actual enumeration? The 2000 PUMS files could be analyzed to answer such questions as: What were the patterns of missing long-form data among population groups and across geographic areas? What would be the effects on distributions of using alternative imputation methods? What do the results suggest for improving response in 2010 or the American Community Survey?
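
The PUMS analysis described above could begin with something as simple as the following sketch. The 2000 PUMS files do carry allocation (imputation) flags for long-form items, but the variable names used here are placeholders.

```python
# Illustrative sketch; file name and flag names are hypothetical placeholders.
import pandas as pd

pums = pd.read_csv("pums_2000_extract.csv")
alloc_flags = ["alloc_income", "alloc_education", "alloc_ancestry"]

# Share of records with each item allocated, by race group.
print(pums.groupby("race_group")[alloc_flags].mean().round(3))

# Multi-item pattern: how many long-form items were allocated per record?
pums["n_allocated"] = pums[alloc_flags].sum(axis=1)
print(pums["n_allocated"].value_counts(normalize=True).sort_index())
```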


The census-March CPS exact match file could be used not only to study imputations, but also to investigate possible reporting errors for census (or CPS) items. MAF extracts, perhaps linked to the Master Trace Sample, could be studied to learn more about the factors contributing to duplication.
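
A sketch of such a reporting-error check on the match file (with a hypothetical layout) follows; it restricts attention to records where neither source imputed the item, so that disagreements reflect reporting differences rather than imputation.

```python
# Illustrative sketch; "census_cps_match.csv" and its columns are hypothetical.
import pandas as pd

match = pd.read_csv("census_cps_match.csv")

# Keep only records where the item was actually reported in both sources.
reported = match[~match["census_tenure_imputed"]
                 & ~match["cps_tenure_imputed"]]

agree = (reported["census_tenure"] == reported["cps_tenure"]).mean()
print(f"Census-CPS tenure agreement rate: {agree:.3f}")

# The cross-classification shows the direction of disagreements.
print(pd.crosstab(reported["census_tenure"], reported["cps_tenure"]))
```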

We understand that resources are limited, yet careful consideration of priority areas for investigation with already-available data from the 2000 census could lead to important research findings for 2010 planning and could further inform users about 2000 census data quality.

Recommendation 9.1: The Census Bureau should mine data sources created during the 2000 census process, such as the A.C.E. data, Person Duplication Studies, extracts from the Master Address File, a match of census records and the March 2000 Current Population Survey, and the Master Trace Sample. Such data can illuminate important outstanding questions about patterns of data quality and factors that may explain them in the 2000 census and suggest areas for research and testing to improve data quality in the 2010 census and the American Community Survey.

9–B.2 Strengthening the Evaluation Component for 2010

It is critically important for the Census Bureau to strengthen the evaluation component of the 2010 census, beginning with the tests and other analyses that are planned to lead up to the 2008 dress rehearsal. To do this, Bureau staff must first step back and ask what priority questions the evaluations should address. Accounting-type reports are important, but they are not a substitute for evaluations that seek to inform users about important aspects of data quality or to develop and test hypotheses regarding factors that contribute to key census outcomes that are important for future census planning.

Second, census operations must be fully specified, and all operational data systems must be fully documented and designed to facilitate evaluation. A recurring theme in the 2000 census evaluation reports is the difficulties encountered by the evaluation staff because important processes and data files were not properly documented. The Bureau is currently reengineering its software development procedures for all of the data processing, management information, and administrative computer systems that will be required for 2010 (see National Research Council, 2003a). This reengineering should incorporate procedures for each system to provide data that are readily accessible for quality control and evaluation purposes, thereby creating a Master Trace Sample (or samples) on a real-time basis. The availability of evaluation information as part of census operations will be critical if the Bureau is to implement computerized procedures for reducing duplicates as part of the census itself. It will also be critical for timely evaluation of, and corrective actions for, other census procedures, and for timely evaluation in planning the 2020 census. Evaluation-friendly software systems should be designed and tested as soon as possible as part of the 2010 census testing program.
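
One possible shape for such an evaluation-friendly system (a sketch of one design, not the Bureau's plan) is for each operational component to write a standardized, timestamped trace record whenever it touches an address in a pre-designated evaluation sample, so that a Master Trace Sample accumulates in real time rather than being reconstructed after the fact.

```python
# Illustrative sketch only; address IDs, operation names, and the output
# format are hypothetical.
import csv
import datetime

TRACE_SAMPLE = {"100000001", "100000042"}  # pre-designated sampled addresses

def log_trace(address_id: str, operation: str, outcome: str,
              path: str = "trace_sample.csv") -> None:
    """Append a standardized trace record if the address is in the sample."""
    if address_id not in TRACE_SAMPLE:
        return
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(),
             address_id, operation, outcome]
        )

# Each operational system would make a call like this at its decision points:
log_trace("100000042", "nonresponse_followup_visit_3", "proxy_response")
```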

The 2010 evaluation program should also make use of statistical tools for data analysis and presentation that can help make sense of large amounts of information. For example, graphical analysis by geographic areas could not only inform postcensus evaluation but also be a useful management tool during the census to identify potential problem areas. Generally, the Census Bureau should seek ways to include real-time evaluations as part of the census, in order to help identify unexpected problems and facilitate appropriate corrective action. An ongoing evaluation program will also be critical to the ability of the planned American Community Survey to change course as needed in order to provide useful data to the public. The preparation by Population Division staff of local-area housing unit estimates from administrative records in the first half of 2000 is an example of a real-time evaluation that was critical to improving the enumeration. These evaluations strongly suggested a sizable net overcount of housing units in the MAF, which led to the special unduplication operation in summer 2000 (see Section 4-E).
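
A real-time evaluation rule can be very simple. The sketch below (with a hypothetical input file) flags local census offices whose cumulative mail return rate on a given day lags far behind the rest of the country, which a manager could use to redirect outreach or follow-up resources while the census is still in the field.

```python
# Illustrative sketch; "office_daily_returns.csv" with columns office_id,
# day, cum_return_rate is hypothetical.
import pandas as pd

daily = pd.read_csv("office_daily_returns.csv")

today = daily[daily["day"] == daily["day"].max()]
cutoff = today["cum_return_rate"].quantile(0.05)
flagged = today[today["cum_return_rate"] < cutoff]

print(f"Offices below the 5th percentile ({cutoff:.1%}) as of latest day:")
print(flagged[["office_id", "cum_return_rate"]].to_string(index=False))
```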

A critical element of a successful evaluation program is sufficient well-trained and experienced staff to identify evaluation needs, specify evaluation requirements for census data systems and documentation, and carry out the evaluations. The Census Bureau, like other federal government agencies, is facing loss of experienced staff due to retirement and other reasons. In addition, the need for a large A.C.E. Program in 2010 will place heavy demands on research and evaluation staff. The Census Bureau will need to give priority to development of technical staff resources and seek ways to augment those resources through such means as involving outside researchers in some studies (as was done for some of the 2000 evaluations).

Finally, the 2010 research, testing, and evaluation program must be open to sharing preliminary results with outside researchers for critical review and feedback. The 2010 evaluation program should also be open to sharing data sets (including microdata with appropriate confidentiality safeguards) that can make it possible for outside researchers to conduct useful studies of the population and contribute to understanding of census data quality.

With regard to the sharing of preliminary results, waiting to share results with researchers until they have been completely vetted inside the Census Bureau is too lengthy and cumbersome a process. Admittedly, the highly critical environment surrounding the 2000 census would caution any agency to be careful about releasing preliminary results, but constrained resources in the Bureau for research place a premium on effective ways of obtaining outside input from the research community. Sharing of preliminary results (clearly labeled as such) should become part of the Census Bureau culture.

Recommendation 9.2: In addition to pursuing improvements for coverage evaluation in 2010 (see recommendation 6.1), the Census Bureau must materially strengthen the evaluation component for census operations and data quality in 2010 (and in the current testing program) in the following ways:

  1. Identify important areas for evaluations to meet the needs of users and census planners and set evaluation priorities accordingly;

  2. Design and document data collection and processing systems so that information can be readily extracted to support timely, useful evaluation studies;

  3. Use graphical and other exploratory data analysis tools to identify patterns (e.g., mail return rates, imputation rates) for geographic areas and population groups that may suggest reasons for variations in data quality and ways to improve quality (such tools could also be useful in managing census operations);

  4. Explore ways to incorporate real-time evaluation during the course of the census;

  5. Give priority to development of technical staff resources for research, testing, and evaluation; and

  6. Share preliminary analyses with outside researchers for critical assessment and feedback.

Recommendation 9.3: The Census Bureau should seek ways to expand researcher access to microdata from and about the 2000 census in order to further understanding of census data quality and social science knowledge. Such data files as the 2000 A.C.E. E-sample and P-sample output files, for example, should be deposited with the Bureau’s Research Data Centers. To help the Bureau evaluate population coverage and data quality in the 2010 census, the Bureau should seek ways—using the experience with the Panel to Review the 2000 Census as a model—to furnish preliminary data, including microdata, to qualified researchers under arrangements that protect confidentiality.
