PROCEEDINGS, THIRD WORKSHOP Panel to Review the 2000 Census OPENING REMARKS OF THE CHAIR DR. NORWOOD: I would like to welcome you all to this meeting. I think it is an important meeting. As you all know, the role of our panel is to evaluate the census, but one extremely important aspect of that task is to evaluate census coverage, including any possible disproportionate undercount and to evaluate any steps that the Census Bureau might take to adjust those numbers. The purpose of this workshop is to help us to get ready to do that job. We have asked members of the Census Bureau to come to this workshop to explain to us how they will evaluate the census and any possible undercount and, if they should decide to adjust the numbers for the undercount, to come and tell us the scientific preparations that they are making so that they will be in a position to make a professional decision about the adjustment, if they decide to do so, and about the quality of the numbers that they might release. In particular, we are very interested in finding out what kinds of evidence they will develop in order to provide a scientific basis for their decision. So the meeting, I hope, and I am sure, will be very useful to our panel. We want to know how the Census Bureau will make its determination and then, if it ad-justs, what data and evaluation it contemplates to determine whether the adjusted numbers are better than the unadjusted ones. The panel will have to decide how it will approach this issue and this meeting is one part of the information gathering that we are doing. We have invited a number of scholars to help us at this meeting and they and the panel will participate in the discussions, asking questions, raising issues, and, I hope, giving us the benefit of their thinking. Let me, first, tell you who they are. We have, over here, Barbara Bailar, who is, I believe, a senior vice president at NORC, the National Opinion Research Center, at the University of Chicago. 
Lynne Billard is in the Department of Statistics at the University of Georgia. Jeff Passel [from the Urban Institute] is not here yet; we will introduce him when he comes in. Allen Schirm is from Mathematica—the Academy also has a panel to help to advise the Census Bureau on what it might do in 2010, and Allen is on the 2010 panel. Michael Stoto is a demographer with the George Washington University School of Public Health. Marty Wells is from Cornell. Don Ylvisaker, from UCLA, is also on the 2010 panel, and there is Alan Zaslavsky from Harvard.1 We also are very fortunate to have the director of the Census Bureau here, Ken Prewitt. We are also privileged this morning to have Barbara Torrey with us, who is sitting over there, who, as you probably all know, is director of the Division of Behavioral and Social Sciences and Education, which is the governing body over the Committee on National Statistics [at the National Research Council]. As I have said, the people around the table, I hope, will be very interactive, because that is the purpose of this meeting. At the end of the day, we will provide a chance for anyone in the audience who has any questions or wishes to make a comment briefly to do so. 1 Zaslavsky was added to the Panel on Research on Future Census Methods (“the 2010 panel”), of which Schirm and Ylvisaker are members, in autumn 2001.
Before we begin, I want to emphasize, first, the panel remains open to the examination of any scientific methodology that it believes would be useful in carrying out its task. Secondly, the panel remains open to all points of view. Although we have done a good deal of work to understand the census operations, we have not yet made any decisions about the evaluation we have an obligation to perform. We hope this workshop today will help us in that endeavor. Third, we are all perhaps too much aware of the fact, which I think will become clear during this discussion, that the data required for effective research on these issues will not be available publicly until the material has been put together by the Bureau. We are also very much aware that the Bureau has been undertaking a tremendous operational effort and it is getting things together as fast as it can, but it always takes a good deal of time. It is a monumental task. On the other hand, I am sure that our census friends fully understand that it is in the public interest for us all to have the information needed for effective evaluation of the decisions that we will have to make. Finally, I would just like to take a moment to compliment Ken Prewitt and the rest of the Census Bureau staff for your cooperation in putting together this meeting and the previous meetings and for your professionalism and openness, and for your assistance, really, in helping us to understand what is going on in the census. We will start by having Howard Hogan introduce these issues. The group around the table has a whole set of papers that develop the methodology—of course, the numbers are not there, because they are not available yet. That is where we are starting. OPENING REMARKS OF DIRECTOR PREWITT DR. PREWITT: Just a word or two, if I may, Janet, and John [Thompson] will spend a few minutes bringing everyone up to date on the operations before we turn to Howard. 
I would like to put some introductory comments in a very, very broad context. Some of you who have good memories will know that after the 1990 census the term “failed census” was fairly frequently used. It was used loosely, that is, without defining what constituted a successful and/or a failed census. Indeed, clearly, 1990 was not a failed census; a failed census would be one that was not used, could not be used, and, clearly, 1990 was used to reapportion, to redistrict, and as the base for intercensal estimates and everything else, so clearly it was not a failed census. Nevertheless, the nomenclature stuck a bit and that is a very bad thing, it seems to me, for the Census Bureau and for the statistical system, to have an operation such as the 1990 census so described. It was not a failed census, not only in terms of the fact that it was used but it was not a failed census even operationally, even though the [net] undercount, as we know, should [not] have moved back up, having declined for the last two censuses. But it does seem to me that it is very, very important in 2000 to bury that label once and for all, because it is simply, as I say, not healthy for the statistical system,
certainly not for the Census Bureau, to live in an environment in which we could even imagine the census failing. John will talk more about where we are operationally, but I do think that one of the things that has been accomplished in 2000 is to replace that label with a set of labels that better describe the census as “successful” or “good” or what-have-you. It does not mean that you still cannot actually have a failure in Census 2000 and that could obviously now happen in the software and in our data cleaning, data matching, and so forth, and we will talk about that a bit today, but we have every reason to believe that is a very, very, very low probability, so we do not any more think of the census along that dimension. I mention that because 1990 introduced something else: Secretary Mosbacher’s decision not to use the corrected data, the adjusted data, overruling the recommendation of the Census Bureau, sending Barbara Bailar to the University of Chicago, and many other consequences. Secretary Mosbacher put into his decision criteria a sentence, which I paraphrase, but basically goes as follows. One of the reasons that he decided against adjusting the 1990 census was, he said, that it would open up the possibility of manipulating the data for political reasons, i.e., somehow the Census Bureau would be able to pre-design a census with a known partisan outcome. Mr. Mosbacher was careful to say that he did not believe the Census Bureau did that in 1990, but it certainly set the possibility that that could happen in future censuses and, therefore, there should not be an adjusted or corrected census. To my knowledge, and I think I have looked enough into the literature to be confident of this, that is the first time a senior official of the United States Government put on the table the idea, the prospect, that the Census Bureau could pre-design a census in order to have a known partisan outcome. 
As you know, although Mr. Mosbacher spoke in the conditional sense, that conditional sense rather quickly got erased and from then on, for the last five or six years, the assumption has been that, indeed, the Census Bureau not only could but would and probably was designing the census, knowing it would have a given partisan outcome. Obviously, people in this room familiar with the census operation know the impracticality of that and also know that it is extremely difficult to predict the partisan consequences, even if the Census Bureau were of a mind to do that. It is extremely difficult to predict the partisan consequences of adjustment or non-adjustment, and I do not need to talk about all that in this room. The point, however, and this is why this meeting is so very, very important, is we need to bury that phrase just as we needed to bury the phrase that the 1990 census was a “failed census.” It is very bad for the federal statistical system and the Census Bureau to live under the cloud that somehow it has the will and the capacity to design a census knowing beforehand what the likely partisan outcome of that census will be. Having said that, there is no doubt that census numbers do have partisan consequences. We are not talking about whether they have partisan consequences or not; we are talking about whether the Census Bureau would design a census in order to achieve a particular partisan outcome.
What is so very important about this meeting, and its deliberation in ways that I will just hint at in a second, is that it is the opportunity to say: yes, census numbers have political consequences, that is why they were put in the United States Constitution, but that is very, very different from saying that the Census Bureau itself has a partisan motivation, partisan capacity, partisan interest, or anything of the sort. Unless we can get rid of that in the vocabulary that surrounds Census 2000, with all of its contention, and so forth, unless we can get rid of that vocabulary and that kind of descriptor, we have done something harmful, it seems to me, to the Census Bureau and federal statistics. That is why we put as much attention to this workshop and, indeed, to the work of the panel as we have. I should say, by the way, obviously this is quite orthogonal to the argument about whether we should or should not correct. That is a scientific decision and it will be made as best we can, in ways we will talk about. There will be critics of that. We welcome the critics. That is not what the question is. The question is not whether it makes sense or not; the question is if it makes or does not make sense, whether that decision is being made on anything other than professional scientific criteria. As you know, the Census Bureau has made a preliminary determination that it is feasible to use dual-systems estimation in Census 2000. However, as you also know, we have not yet determined whether the A.C.E. [Accuracy and Coverage Evaluation Program] will meet our expectations and, indeed, we will evaluate both the census and the Accuracy and Coverage Evaluation early next year to decide which set of data to denominate as the P.L. [Public Law] 94-171 data [redistricting data]. I want to say that again. We have determined that it is feasible to use the Accuracy and Coverage Evaluation. 
We have not determined whether we will. That is what the workshop is obviously about. There are many, many people in Washington, D.C., and other places that do not believe that, who believe we have already made up our minds. Well, it is simply not the case. We will do all the kinds of stuff that Howard [Hogan] will describe in order to make that determination and the panel will, obviously, be examining that and decide in its own judgment whether the kinds of things we are bringing to bear are the right kinds of things to bring to bear. We have issued a feasibility document, as you know, and circulated it to all of you and that is where we set forth why we think it is feasible [Prewitt, 2000]. As you also know, partly responding to Secretary Mosbacher’s decision after the 1990 census, Secretary Daley did issue a federal regulation that delegated the power, the authority, to make the decision about whether to adjust or not to the Census Bureau. That did occasion some conversation, active conversation, in the Congress and in other places in Washington. I should say, to me, it was odd that it occasioned that conversation. We obviously issue the apportionment numbers, and we do a lot of complicated technical things to get the apportionment numbers out without thinking that we would check those numbers with the Secretary of Commerce, and we do not see anything odd about doing the same thing with the next set of operations, that is, the Accuracy and Coverage Evaluation operations.
Indeed, we think that all that Secretary Daley’s decision was, was to reestablish the 1980 pattern, when Vince Barabba and the professionals at the Census Bureau indeed made the decision. It was not until 1990 that for some reason—not “for some reason,” for reasons we know—the decision got moved to the secretary’s office, and it is that anomaly, if you will, in statistical practice that was corrected by the federal regulation notice. From our point of view, of course, it is where it belongs and the Census Bureau will make the decision, obviously. In making that decision, there is a special committee that has been formed, outlined in the feasibility document—we call it the ESCAP committee, the executive committee—for this process.2 It will make a recommendation to the director and, I stress, “whoever that happens to be” when that decision is made. Clearly, as you know, the director’s term is coterminous with the Administration’s term and, therefore, one way or the other, something has to happen after January 20th, irrespective of the party that wins the election. The whole idea about how the decision is going to be made and the delegating of authority to the Census Bureau was really made behind a veil of ignorance; that is, the decision about how to make the decision was made not knowing who would be making the decision. The ESCAP committee will deliberate in January-February, hopefully making the decision by early March. Obviously, that decision will be based on only technical considerations and scientific analysis to the best of our capacity. Now I want to return to the larger theme about Secretary Mosbacher’s observation following the 1990 census about the possibility of designing a census to have a known partisan outcome. One of the things that the Census Bureau has tried to do over the last several years is to dissuade people from making that accusation by being as transparent as it can be. 
Indeed, we determined that in this highly politically charged atmosphere it was very important for the Census Bureau to try to be transparent, consistent with good statistical practice, though at the edges we have actually done things that would not have been prudent from an operational or statistical point of view in order to display even more openness and transparency. Just a few factoids: I have testified before the Congress 17 times as census director. I have not looked up comparative data but my guess is, Janet, that that is reasonably unusual for an agency head, in less than two years to testify 17 times. We have responded to over 150 letters from the House Subcommittee [on the Census] in the last two years and provided a massive amount of data in response to it. There have been a number of field visits by the GAO [U.S. General Accounting Office], the [Congressional] Monitoring Board, the I.G. [Commerce Department Inspector General], and others—a total of 522 field visits during our operations. The GAO itself has testified nine times and issued nine reports over the last two-year period. The Department of Commerce Inspector General’s office has issued 25 reports on the census operations. Indeed, by my count (it is a rough count, obviously), there are well over one hundred people in those formal oversight operations who are full-time—full-time monitoring the census operation. 2 ESCAP stands for Executive Steering Committee for A.C.E. Policy.
I should say that one of the things that this census monitoring operation failed to do, and I certainly say that publicly in front of our friends from the GAO and the inspector general’s office today, they wrote a large number of reports over the last two years talking about the things for which the Census Bureau was ill-prepared. The one thing they forgot to put into those reports, and this is not just a flip comment, the one thing they forgot to put in those reports is that we were ill-prepared for the amount of oversight we had to deal with. There is not a single person who ever said, “And you had better staff up for these 25 reports or these 522 field visits or these 17 testimonies.” No one ever asked, “Are you staffed to do that?” It turned out that to be responsive to all of that, we had to deflect some of the important operational management time from Jay Waite and John Thompson and other people, because we simply had to be responsive to that. I would say to the 2010 panel members here today, do not let that happen again. Make sure that the 2010 census is planned knowing that it is going to be as heavily scrutinized as the 2000 census has been. In addition to those formal oversight processes, of course, we have been meeting with our advisory committees throughout the decade, soliciting input. We have tried to be open with the press. I have held 40 press conferences since I became Census Bureau director and, of course, there is the NAS panel. We have also issued our executive state of the census report on a weekly basis and given that to all of our stakeholders. All of our decision memos have been widely circulated. We have obviously pre-specified as much as we can, especially the A.C.E. process—the panel has obviously been paying attention to that. We have extensively documented—I probably do not have to remind you about the documentation that Howard and his team have generated. 
We are currently providing the agenda and the minutes of our ESCAP meetings to the subcommittee and to other interested stakeholders. I just want to emphasize that to make the simple point that I think that is the price we had to pay and will continue to pay; that is, in order to claim that we are transparent we had to be as transparent as we possibly could be, and, as I say, there were plenty of times in the field operation period, and there will certainly be times in the next phase of the census, where doing that did subtract some of the attention we should have been giving to the job itself. Nevertheless, if it helps bury forever, or at least for the foreseeable future, the idea that the Census Bureau has got some sort of team out there figuring out which political party is going to benefit from this kind of post-strata structure, then it was well worth the price. Now a word or two about the stuff that Howard will be talking about. We, as both Howard and John will emphasize, will be looking at a large number of processes, of data sets, of our own operations, during the tough period between January and February, when the decision will be made about whether to correct the data or not. We will be looking at those data on a flow basis, and we will make all data that we use in that decision process available to the public.
There will be pressures on us, intense pressures, and they will start the minute this meeting is over. It will probably start the minute I finish talking. There will be intense pressures on us to release some of those data earlier than we think is prudent to do so. We will resist those pressures the best we can, and I want to set forth some of the reasons that we will try to resist those pressures. We believe that premature release of some data sets would simply do more harm than good. We have had some very unfortunate experiences already in the census where we have prematurely shared data with some of the oversight apparatus and found that they were not analyzed correctly. We spent a lot of time trying to get them analyzed correctly, and, in the meantime, of course, there is an awful lot of press attention and other kinds of attention, so it creates public confusion, quite honestly. We also think that no data should be released until we have verified them, until we have done all of our quality checking on them. That takes time. We do not want half-formulated data floating around where people have this interpretation and that interpretation until we are as certain as we can be that the data are cleaned, and we simply want to minimize the amount of incorrect conclusions that can be drawn. Also, there is the distinct possibility that any kind of early release could invite the appearance of manipulation; that is, the very thing we are trying to get away from could be aggravated or accelerated by that process. What we need to insist on, going into the January-February period, is that we are not selectively releasing data in order to try to create one or another assumption or predilection about whether we are going to correct or not. I will give you just one example that John Thompson used in a session the other day. 
Let us say that we find that demographic analysis suggests that there is a large differential undercount, so we release that. We could then be accused of having released that in order to sort of set the predicate that we should be using the A.C.E. That would not be why we would do it, but we could get accused of that, so it is our judgment at this stage that all of the data will be released but none of them should be released until we have looked at them, examined them, weighed them, made our judgments about them, and then shared them at the same time with everyone, to the public, to the panel, to the subcommittee, the Monitoring Board, and everyone who has an interest in these sets of data—certainly to the litigation process, which, of course, will be interested in these data as well. We are very concerned that we do not have bits and pieces of data sets out in ways that could create public confusion or could lead someone to interpret what we have done in such a way as to suggest that we were being affected by political considerations. Finally, I would say, obviously, and I appreciate Janet’s opening comments, we have a lot of work yet to do—very, very major work. It is obviously out of the field and into the offices but we now have to do the kind of data analysis and data cleaning and correction, and so forth, that goes into finally producing the apportionment counts and the redistricting data. We must somehow have the opportunity to deliberate and argue about these data amongst ourselves before we
talk about them. Every decision that we make will be documented. Every datum that we use will be released. There will be nothing “secretive” about it, but we do have to still get the work done. We are on a very tight time schedule to meet our next two major deliverables, which are the apportionment count on the 31st [of December, 2000] and the redistricting data by April 1st, 2001. I just wanted, Janet, to introduce the day that way. To be redundant, I do not mind being redundant on this, I feel so very strongly about it, as I hope everyone in the room does, I want to say that whatever we should have accomplished by the time Census 2000 is over, of all the things we want to accomplish with Census 2000, one of the things we must—must—accomplish is that the Census Bureau itself is not partisan. The data may have partisan consequences but the Census Bureau is not partisan and I really urge those in the room, especially those on the panel, who may be critics of dual-systems estimation—and that is fine, as you all know, we have no trouble with that—we really hope that the criticism of dual-systems estimation is articulated on scientific and technical grounds and not because somehow the Census Bureau can manipulate data, which it does not know how to do. As I have said many times publicly, we do not have experts in voting behavior, we do not have experts in redistricting, we would not know how to go about trying to design a census that would have a known partisan outcome, and especially we do not even know who will be running the Census Bureau or which particular party will have appointed him, when this decision is made. The shallowness of that argument is, I think, transparent itself, but, nevertheless, it sits there in the atmosphere, and I really hope that it is laid to rest. 
That is why we take this day to be so very important as a kind of collective effort by the scientific community to have a legitimate debate about the methodologies. We hope to use this National Academy of Sciences’ process to set aside what has been, I think, a decade-long and extremely unfortunate charge leveled against the Census Bureau. Thank you. DR. NORWOOD: Thank you very much, Ken. Now I would like to turn to John Thompson. We could not possibly start any discussion without knowing where we are, John, and we count on you to tell us about all the things you have done and still have to do. PLANNED DECISION PROCESS MR. THOMPSON: I will be pretty quick so that we can give Howard a lot of time. I will just mention a few things. We have finished all of our major field operations. As Ken said, we are out of the field, we now have the data back in the offices. There will be one more field operation, which Howard will probably talk about, and that is a follow-up operation as part of the A.C.E. We are right now in the process of closing down our local census offices. All but 43 have been closed, and our schedule calls for 36 being closed by the middle of October and the remaining 7 by the end of the month, and we are moving right along on that.
On our data capture, we have been capturing the data in two phases. We call it pass 1 and pass 2. The first pass we collect and capture all of the information, short-form information, including short-form information from long forms. That is finished. We are now doing the second pass, where we go back and capture the long-form information. What this entails is reprocessing the long-form data. We have already scanned them into our computers and we have stored them. We reprocess them and run our optical mark recognition software and optical character recognition software on them, and then we send what we cannot recognize to clerks for data entry. That is also moving along on schedule; in fact, it is probably moving along just a little ahead of schedule. We are very pleased with that. Data capture centers: we are scheduled to have all the Title 13 materials out of our data capture centers by December 1st. This will allow us to go in and start the de-installation process in order to return the centers to the states that the leases require. We still, though, have quite a bit of work to do. We are now in the process of doing a lot of computer editing of our data files. There are a couple of major deliverables that we are shooting to achieve. One is—and you will hear some of the Census Bureau people talk about this—what we call our “census unedited file.” This is basically a file where we pull together all the information we have collected and we have a file organized, one record for every housing unit in the United States, and for many housing units we have had—I should not say “many,” but a certain proportion of housing units—more than one response because of our census process allowing multiple responses. We have a process in place that puts those responses together into one response for each household. We are running that right now and producing our census unedited file. 
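The unduplication step Thompson describes, folding multiple responses for one housing unit into a single record, can be caricatured in a few lines. This is a sketch under stated assumptions only; the field names and the first-non-missing selection rule are illustrative, not the Bureau's actual unduplication algorithm.

```python
def merge_responses(responses):
    """Collapse multiple responses for one housing unit into a single
    record: for each field, keep the first non-missing value in the
    order the responses are listed.

    A deliberately tiny stand-in for the Bureau's real rules, which
    involve much more elaborate person- and household-level logic.
    """
    merged = {}
    for response in responses:
        for field, value in response.items():
            if value is not None and field not in merged:
                merged[field] = value
    return merged

# Two hypothetical returns for the same housing unit: a mail form
# and a follow-up interview, each leaving different fields blank.
mail = {"tenure": "owner", "persons": 3, "age_householder": None}
followup = {"tenure": "owner", "persons": None, "age_householder": 42}
merged = merge_responses([mail, followup])
# merged == {"tenure": "owner", "persons": 3, "age_householder": 42}
```

The point of the sketch is only that unduplication is deterministic and rule-based: given the same set of returns, the same single record comes out, which is what allows the step to be documented and audited.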
The census unedited file is a big deliverable, because that goes into the computer matching and then the clerical matching for the Accuracy and Coverage Evaluation—Howard will probably talk a little bit about that. Jay Waite tells me that we have started the computer matching for the A.C.E. and that is well under way. The next big step for the A.C.E. is clerical matching, which will start October 11th, and we are on schedule to hit that deliverable. The census unedited file also goes through a process that should not be that surprising, an editing process, where we edit the file for inconsistencies, we do statistical imputation to correct for missing data and nonresponse to the characteristics. That produces the next major deliverable, called the “census edited file,” which then is used to produce the census tabulations, including the apportionment and the redistricting data. I will just finish the update by noting that we have announced that we anticipate a surplus of approximately $300 million. The surplus resulted from a number of factors, primarily because we did get a higher-than-expected response rate and in our field operations our management of our field offices was more efficient than we anticipated and they did not have to hire as many people. This allowed us to realize some savings. We will probably be talking about that later in the year as we analyze these data more. 
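The computer and clerical matching Thompson describes feed the dual-system estimation mentioned earlier in the session. Stripped of post-stratification, sampling weights, and corrections for erroneous enumerations, all of which the real A.C.E. methodology adds, the core capture-recapture arithmetic can be sketched as follows; every count and name here is hypothetical, not drawn from the workshop documents.

```python
def dual_system_estimate(census_count: int,
                         p_sample_total: int,
                         matches: int) -> float:
    """Basic Lincoln-Petersen (capture-recapture) estimate.

    census_count: persons enumerated by the census in the area
    p_sample_total: persons found by the independent survey (P sample)
    matches: persons found by both systems

    Assumes the two captures are independent and matching is
    error-free; treat this as the core arithmetic only, not the
    Bureau's production estimator.
    """
    if matches <= 0:
        raise ValueError("estimate undefined without matches")
    return census_count * p_sample_total / matches

# Hypothetical post-stratum: 950 census enumerations, 900 P-sample
# persons, 870 of whom match a census record.
estimate = dual_system_estimate(950, 900, 870)
# roughly 982.8, implying about 33 persons missed by the census
```

The design requirement the real methodology works hard to approximate is that inclusion in the P sample be unrelated to the chance of being counted by the census; when that fails, the simple ratio above is biased, which is one reason the A.C.E. estimates within post-strata rather than in one national pool.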
With that, I am going to basically conclude. We are looking forward to your discussions and hearing from Howard. I would make one final footnote for Howard. What you see today is our current thinking about the data that we are going to evaluate. As we start deliberating and analyzing the data, we may find that some of the data will not be needed or may not be available. We may also find, as the case may be, that we need some additional data, which we will also be documenting and discussing. With that, I am done, Janet. DR. NORWOOD: Thanks very much. I think, from what I know, the census is probably one of the largest operational efforts ever. Certainly it is the largest in the federal statistical system and I know how hard you have worked on it, so you must be relieved to be able to tell us that a good part of it, at least, is over. MR. THOMPSON: We still have quite a bit left to do. My friend, Jay Waite, down there, and I are not claiming victory yet. DR. NORWOOD: I know enough about the preparation of data to know that there is a lot more to be done. I would like to turn now to Howard. Let me say, before Howard begins, that I would like this to be, for the rest of the day, really, as informal as possible. I encourage those of you sitting around the table—certainly, the panel, I know, does not need any encouragement to raise any questions and to intervene—but I would like to underscore that they should and that the others of you, all of our invited guests, please do the same, because what we want to accomplish today is a full understanding of the approach that the Census Bureau is taking and if we think there are things that should be done that Howard has not talked about, it would be useful for Howard to know about them. If we do not understand something, we ought to speak up. 
If you have any trouble in getting recognized, please somehow put your name tag up on end so that I can see it and I will be sure to try to call on you. Howard?

REVIEW OF THE QUALITY OF THE UNADJUSTED CENSUS

DR. HOGAN: Thank you. I would like to begin by thanking the NAS and the panel for inviting us here today to publicly release our plans. I would like to say, walking in today and looking at the name tags and faces, I felt I was among friends—perhaps not everybody who agrees with everything we did, but friends, nonetheless. Among the thanks, especially along this back row here: when people introduced themselves and said, “Census Bureau,” “Census Bureau,” “Census Bureau,” you should have been able to match those names to the names on the documents. They put in a tremendous amount of work to get ready for these documents and to document what we are doing. I would also like to thank the chair and the staff for giving us this opportunity. As John said, we finished [A.C.E.] interviewing. We have started the computer matching process. We have our clerks trained and they are now doing some practice clusters and everything else, so that we will be able to begin clerical matching as soon as things get through the computer matching pipeline, so we are on a roll here and very happy with where we are.
The documents you have are designed as input to the Executive Steering Committee for A.C.E. Policy, or ESCAP. I am going to go through them very briefly and then we will go through them in great detail. [NOTE: The documents provided by the Census Bureau for the October 2, 2000, workshop were drafts, containing background text, proposed topics for inclusion, and table shells, indicating the kinds of analysis the Bureau proposed to conduct to inform ESCAP. The documents contained no results from the census or the A.C.E. Each of the sixteen documents provided in the “B-series” of memoranda corresponds to a document that the Bureau released March 1, 2001, containing full text and tabular and other analytic results. References to these documents in the workshop proceedings are to the final, published 2001 version of each. References by speakers to page numbers in the draft documents have been retained, although they do not necessarily, or likely, correspond to the published version. Note also that, in addition to the completed versions of the 16 documents for which the Bureau provided drafts at the workshop, the Census Bureau released another 3 documents in the B-series (Griffin, 2001b; Mulry and Spencer, 2001; Navarro and Olson, 2001). All of the documents released on March 1, 2001, are available at http://www.census.gov/dmd/www/EscapRep.html.]

DR. HOGAN [cont’d]: The first one, which is pretty much how I will be talking today, is the one by me, and it gives an overview [Hogan, 2001]. This is followed by another overview, written by Jim Farber [2001a]. This synthesizes the data in one place, sort of the Reader’s Digest version of the documents. There is a document on the quality of the census processes [Baumgardner et al., 2001]. There is a document on demographic analysis [Robinson, 2001]. Then there is another document on demographic full count review [Batutis, 2001].
There are a number of documents that are on the various A.C.E. processes, and there is, finally, a sort of orphan document on the multiplicity estimator that we use to measure the service-based enumeration of the population that includes what many people call the homeless [Griffin, 2001a]. I will be referencing these documents as we go along today, but I will principally be following the document that I wrote [Hogan, 2001]. These documents say where we are today, what we plan to do. We worked very hard to get here. In planning for the A.C.E., we tried very, very hard to pre-specify everything to say exactly what we were going to do, lay it out there, and, barring very unusual circumstances, that is what we plan on doing. Pre-specification we felt was quite important for how we came up with the numbers that we are preparing for possible use for adjustment. The documents here we do not really feel are appropriate for complete pre-specification. The idea of making this decision, of deciding which data sets are more likely to be more accurate, the corrected or the uncorrected, we do not feel calls for pre-specification. We are taking what we think is our best shot today. If more data flow in that we had not even thought about but would make one believe one set was or was not more accurate, then those are data we plan to collect and we plan to put, first, in front of the ESCAP and then, as the director said, later, publicly.
So this is an input, but I do not want to make this the goal of everything we have done so far, where we come up with lambda and, if it is positive, we adjust and, if it is negative, we do not adjust. It is one input that some people find very useful and some people might be totally confused by it.

DR. YLVISAKER: I would argue against loss functions as a decision mechanism in basically any form other than as some sort of contributor that somebody wants to muse over the weekend about, or something like that, because there are too many users and there are too many uses, and so on, for it to be a determinant in any form.

DR. HOGAN: I do not think we think of it as a determinant. We think of it as one more piece of information, one more way of looking at the data we have got in here.

DR. YLVISAKER: We are not talking about a single number but hundreds of them, say?

DR. HOGAN: I do not know if we will have hundreds of them. We will have somewhere between one and 100. We [will] clearly have several. We will clearly have loss functions looking at the count. We will have loss functions looking at proportional shares. Clearly we will do that. We will have them at the congressional district level, and we are going to work to have it at the level of the average state legislative district.

DR. YLVISAKER: With targets chosen as you mentioned before?

DR. HOGAN: Yes, with our reading of these data. The target, at least with my understanding of the loss function, is fairly complicated in the risk [meaning not clear], but you bring in the undercount the A.C.E. had, that difference, together with your biases and variances around that.

DR. YLVISAKER: The biases, which we do not really have.

DR. HOGAN: We have our measures of the biases with the uncertainty around them.

DR. YLVISAKER: The uncertainty from 2000? I mean, maybe with large uncertainties, yes.

DR. STOTO: One way to avoid the difficulty of the loss function is not to use a loss function but to think about patterns. Presumably, what we are talking about here is a change that would reduce bias at the cost of some variance. Of course, that is not the same across the board. Variance is more important in small areas and bias is more important for larger groups, and there are different kinds of biases that, presumably, would be more important than others—if the undercount is concentrated in certain demographic groups or in certain parts of the country or in certain socioeconomic groups, and so on. One thing, maybe, to do would be to think about different patterns of bias that might be there and different patterns of variance that would be introduced, and think about the tradeoff between them. What are the things that we are comfortable with? At what point do we think that we get a better deal by adding some variance to reduce bias? Maybe even discuss them with politicians and decision makers to get a sense of what this is all about.
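The lambda comparison mentioned above, total loss of the unadjusted counts minus total loss of the adjusted counts, with a positive value favoring adjustment, can be illustrated with a toy squared-error loss. All numbers and the loss form are invented for the example; the Bureau's actual loss functions are more elaborate and fold in bias and variance estimates.

```python
# Toy loss-function comparison (invented numbers, not the Bureau's spec).
# Squared-error loss of each set of counts against assumed "true" targets;
# lambda > 0 means the adjusted counts incur less loss than the unadjusted.

def squared_error_loss(estimates, targets):
    return sum((e - t) ** 2 for e, t in zip(estimates, targets))

targets    = [1000.0, 2000.0, 500.0]  # assumed true post-stratum counts
unadjusted = [980.0, 1950.0, 470.0]   # hypothetical census counts
adjusted   = [995.0, 1990.0, 505.0]   # hypothetical A.C.E.-corrected counts

lam = squared_error_loss(unadjusted, targets) - squared_error_loss(adjusted, targets)
print(lam)  # 3800.0 - 150.0 = 3650.0
```

The same comparison can be run on proportional shares instead of counts, which is the distributive-accuracy version discussed in the exchange above.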
DR. HOGAN: Leaving aside discussing with politicians, I think we do need to return—and this may be a good time—to where we came in. We, the Census Bureau, did not go down the dual-systems route without a history and it is the history of the pattern of differential undercount. We certainly hope that the A.C.E. or whatever can make the census better overall; that is certainly part of its goal. But the principal reason this whole discussion started 25 years ago was the differential undercount. I think looking at the patterns, the patterns of biases, the patterns of variances, is very important in our decision process. First, does this census have a differential undercount? If the answer is no, that is going to be very important evidence. If the answer is yes, and the A.C.E. can make this a lot better, all other things being equal, then that is very strong evidence to go forward (a lot of other things would have to be equal). We should not look at this solely in terms of mean squared error. It arose from a history.

DR. STOTO: I guess another way of saying what I said might be to think about what kinds of tables and charts you might be able to generate when the data are in hand that show something about the differential bias and how much that can be reduced, and to show something about the uncertainty in the estimates at different levels and how much that might be increased, to sort of think about being clear about the pros and cons of the different options that are on the table.

DR. HOGAN: I am getting really close to the end. Let me say just one thing—I will not even dwell on it very much. Our uses of time have been proportional, I think correctly proportional, to the importance. The A.C.E. is not the only statistical adjustment that we are proposing to apply after the apportionment counts.
We also have the multiplicity estimator for the service-based enumeration, an estimator where we show up at, say, a homeless shelter—it is a little bit more complicated than this. We show up at a homeless shelter and we ask who is here now and how often do you use homeless shelters. If the person says once a week, the person gets a weight of 7. If he says he is there all the time, he gets a weight of 1. This is a statistical model that we will be conducting and looking at, making sure it was under control and, assuming it was, bringing that into the census files as well as the A.C.E. There is a paper here that documents that, if anybody is particularly interested. We can discuss it, either online or offline, but I did want to make sure that it was not completely forgotten in the discussion. I think that brings me to the end—in more ways than one.

DR. BELL: On that last topic, I did not see anything that really was an attempt to evaluate the multiplicity assumption or how good that estimator was going to be. Is there anything planned?

DR. HOGAN: No, apparently not. It was tested in the dress rehearsal; we have the results from the dress rehearsal.

DR. NORWOOD: I want to, first, thank you, Howard. You have done yeoman’s service.

DR. HOGAN: You are welcome.
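The weighting rule Dr. Hogan describes is a classic multiplicity (frequency) weight: a person found on one night who uses shelters on d of the 7 nights in a week stands in for 7/d people. A minimal sketch with an invented roster (not Bureau data or the production estimator):

```python
# Multiplicity weighting for a one-night shelter roster (illustrative).
# Someone there one night a week gets weight 7/1 = 7; someone there
# every night gets weight 7/7 = 1, matching the rule described above.

def multiplicity_weight(nights_per_week):
    if not 1 <= nights_per_week <= 7:
        raise ValueError("nights used must be between 1 and 7")
    return 7 / nights_per_week

# Reported nights-per-week for each person found on the enumeration night:
roster = [1, 7, 7, 2]
estimate = sum(multiplicity_weight(d) for d in roster)
print(estimate)  # 7 + 1 + 1 + 3.5 = 12.5
```

The evaluation question Dr. Bell raises is precisely whether these self-reported frequencies, and hence the weights, are reliable.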
FINAL COMMENTS

DR. NORWOOD: I would like to go around the room and ask our invited guests for comments, if they have any, that they would like to make. May I start with Marty [Wells]?

DR. WELLS: It seems from the discussions this morning that there are a lot of methods and tables to find the errors in the census pretty easily—it is easier to find those. In the afternoon we talked about finding errors in the A.C.E., and a lot of those were a bit harder to find. There is a human tendency in decision making, when you can find errors in one place and the other errors are more ambiguous, to lean toward where you can find the errors. Could you comment on the way that you are going to think about this? A lot of the discussion of the A.C.E. was not as detailed as what we had this morning. How are you going to weigh the two: the ambiguity in evaluating the A.C.E. against the more systematically evaluated census? How will your committee balance that? A lot of the evaluation and quality issues you talked about for the A.C.E. do not seem to be as solid or on as firm a foundation as the evaluations of the census. It is easier to find errors in the census, maybe, than it is to find errors in the A.C.E.

DR. HOGAN: Here is the way I think of it. First, for both of them we have comparison with the demographic analysis estimates; that has been the traditional standard, and it can be equally applied or misapplied to the census and to the A.C.E. at the aggregate level. In terms of other errors, my reading of these documents is that we are spending a lot more time on finding errors in the A.C.E. than in the census.
The kinds of errors we have, when we have the results of quality control on the one and quality control on the other, some operational measures on the one and operational levels on the other, when we actually focus in on the components, we are really, I think, putting the A.C.E., at least in my viewpoint, under a far more intense spotlight than the census processes, so I am not sure that I agree with your premise.

DR. WELLS: But it seems that with the A.C.E., some of the evaluations are just not clear: the correlation bias, synthetic bias, various things. You cannot get as detailed an assessment there.

DR. HOGAN: Outside of the A.C.E. and demographic analysis comparison, the same kinds of things apply to the census, where we do not know much about the coverage from the census errors. The one thing I think makes it a little more difficult is, to the extent that the A.C.E. is successful and has reduced the gap between the count and the true population, you are measuring a smaller residual. We start out with, at least using 1990, a gap of about 2 percent. At the end of the PES we had it down much smaller and, therefore, seeing how really close you were at the end was a little bit harder with the PES. Again, my reading of this is that we are putting the A.C.E. under certainly as intense if not a far more intense spotlight than the census.
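The dual-system logic behind the A.C.E. and the PES can be sketched with the textbook capture-recapture estimator, a simplification of the Bureau's actual post-stratified estimator; every post-stratum and count below is invented. For a post-stratum with census count C, coverage-survey count P, and M people matched in both, the estimated population is C * P / M:

```python
# Textbook dual-system (capture-recapture) sketch, not the Bureau's exact
# A.C.E. estimator. All post-strata and counts below are invented.

def dual_system_estimate(c, p, m):
    """c: census count, p: survey count, m: matched in both."""
    return c * p / m

strata = {
    # post-stratum: (census count, survey count, matched count)
    "renters": (900, 100, 85),
    "owners":  (950, 100, 96),
}

for name, (c, p, m) in strata.items():
    dse = dual_system_estimate(c, p, m)
    undercount = 1 - c / dse  # share of estimated population the census missed
    print(f"{name}: DSE={dse:.1f}, undercount={100 * undercount:.1f}%")
```

In this invented example the renter stratum shows a 15 percent net undercount against 4 percent for owners, the kind of differential pattern at issue throughout the discussion.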
MR. THOMPSON: The A.C.E. we put under a much more intense spotlight than the census. We do not even try to measure the response error in the census and the synthetic error in the census or the correlation bias in the census before we apply the results for apportionment. We are putting the A.C.E. under a much more intense spotlight than the census. Those errors are in the census, too. We just do not try to measure them, because we do not have the facility to do it. The A.C.E. being a sample, we have the ability to try to understand it a little bit more than the census. I think I am with Howard; I think we are putting the A.C.E. under a much more intensive spotlight.

DR. BILLARD: My reading of what you were saying, Martin, was that we have to separate out operational errors from statistical errors, and we did spend a long time this morning talking about operational errors, and it was the statistical errors that we were talking about in the A.C.E. We did not happen to talk about the operational errors in the A.C.E.

DR. NORWOOD: The papers talk about the procedures and all the same operational errors. In fact, we know more about the A.C.E. because it is a statistically designed sample survey, but that is a useful comment, because I think it is important, if that is the impression that has come across, that you be evenhanded.

DR. YLVISAKER: I would just go a little further with that, because the A.C.E., I think, has to be held to much higher standards, and they are arguing that it is. I missed in that section on the evaluation of the A.C.E. information about how the quality of A.C.E. data might track the quality of census data, the perceived quality of census data; that is, in local places where census counts are bad, we are going in and, presumably, correcting them by looking at those people. If we are using data that are of about the same quality, then I do not expect to see a lot of corrections.
I think that means it has to be inspected and maybe even more than appears in that particular section. I had some questions about the attention paid to adjustment results. We are looking at 448 times 51-or-so pieces of paper as local people come in, and we look to see if we can see inconsistencies, and so on. My first question is what is an inconsistency, and my second question—of course, the first question has no answer but the second question is what is the remedy, and I think you satisfied me somewhat in that regard; that is, a remedy is that once we have the data we put out our A.C.E. numbers and that is it. I think I got assurance from you on that score, so I will not raise that particular issue, but I am a little disturbed by the stress that has been put on numeric accuracy as opposed to distributive accuracy. I guess I would point in that direction to simply how one is going to look at these two things. We are looking for consistency. How does one look for consistency? One compares. As I say, distributive accuracy is really what happens in the world. We do not say, well, let’s see, did somebody get up to 50,000 here, that means the census is better or the A.C.E. is better, or anything of that nature. It is not done in an absolute sense, it is done by making comparisons with 1990, with everything else, so I think distributive accuracy should carry more weight than numerical accuracy.
I am a little disturbed that it has been made as a preliminary decision that the A.C.E. is more accurate. Then I guess it has been settled that these are now the corrected and uncorrected forms of the census.

DR. PREWITT: May I ask a question? This sounds more flip than I intend it, so forgive me for that, but I do not know how else to frame it. Let us say that we have now finished the basic enumeration work, we are doing our data editing, and so forth. We think we did a first-rate job. We do not think we are going to field the A.C.E. Would you feel good about that decision?

DR. BROWN: This is a hypothetical question.

DR. YLVISAKER: Say it one more time, please.

DR. PREWITT: The hypothetical is we finished the census, the basic enumeration census, and we think it went pretty well, we counted almost everybody, and we are tired. We are coming in here today to announce we are not going to do the A.C.E. Would you feel good about the census?

DR. YLVISAKER: I have a problem with the census. I think Howard mentioned the problem; it has come up in a couple of ways. When we run the primary selection algorithm we decide that this is the census. If we go back, we say, well, that is the way it came out. We do not go and look to see what the other form said. This is how the census came out. Howard mentioned this with respect to the variability across post-strata; that is in the population, this is the census, after all. We might find it in the A.C.E., too, but that is what we are finding; it is the census.

DR. PREWITT: Let me then ask you, did the census include imputation? What if we had stopped the census before we did the imputation?

DR. YLVISAKER: I did not raise any questions about imputation.

DR. PREWITT: I am just curious about at what point in the process you decide that we now have the best count we can have. We could have stopped prior to imputation.

DR. YLVISAKER: I guess my basic problem with adjustment is and will continue to be that we do not know precisely where to put the people. We can run these things, there is no doubt that we can run it, but we do not really know where to put people, and there are some pretty convincing arguments that we have got a whole pile of people and we are not quite sure where to put them. If we have a mechanism for putting them, I say fine, but I do not know that that is a better picture than the census.

DR. BILLARD: I have not read all these through as carefully as I would like to. I must say I am very impressed by the material we have been given and the presentations and listening to the discussion today. I am left here sitting and thinking—well, one reaction is that I am amazed at some of the sorts of things that can go wrong, or the types of data that you should get that you do not get, and I am amazed at how many of these have been thought about by the Bureau and by different people and have answers to them. I am not sure I could have thought of all of those places where it can go wrong myself. I am reasonably comfortable from what I have heard that systems are in place to check things like the matching, movers, and all of these things. I am not going
to sit here, and I cannot imagine the Bureau would, either (maybe they would), and say that the processes in place are the best they can be. I mean, you can always improve with time and, anyway, I have not sat and looked at those algorithms, I have not used them, so how can I evaluate them in that sense? If we are going to see results afterwards about how they work and how they do not work, that is the best we can ask for, and I am pleased to hear that we will have them. In terms of the A.C.E., we do know historically—and demographic analysis has shown us—that there is a differential undercount, so it seems to me important that we do, again, the best we can in terms of trying to measure that. My understanding is the methods being proposed here will help measure that. I think the same process checks that you were talking about a minute ago relative to the census, my sense is that they are also in place for the A.C.E., so I do not have a problem with that. I do not know what the biases will be, what the variances will be. I do see that the sample sizes are roughly double—is that right?—what they were in 1990, and that has got to be a huge improvement right there, even if that were the only change. That would seem to me a huge improvement. We just have to get the best count, the most accurate. It may not be the absolute correct count, but it has to be the most accurate that we are able to produce, and it sounds to me as if we are on track.

DR. BAILAR: I think in previous census years, when we thought about the correction or adjustment of the census, we used only one measure of the accuracy of the census, and that came from demographic analysis, so we were looking primarily at was there an undercount, was there a differential. I was glad to see some emphasis on looking at the errors in the census itself. I think that is really the first time that has been done.
Even so, some of this may not be available at the time when you are making your decision, but will come out as part of the evaluation studies, but there will be some things that are available, and I think they are useful to look at to give you indications of the quality of each of these different pieces. I really am just in admiration of the kind of preparation that has gone into this, with all the tables and prespecification. It is obviously a tremendous amount of work thinking about how those tables are going to look, how they are going to be prepared, at what level, and so forth. I think it is very good, and I am sure it is going to give you a lot of insights into where you have problems, where you want to look further, but it is also going to be, I think, a very good indication to those critics of the process that you have prespecified things as far as you can. I do think your procedures are simplified over 1990, and I think if this is the year when adjustment actually is done, that that is a good thing, though you may want to go back to something else. If you do it once, you are probably going to do it from then on. I think then you can go back and look at other ways of doing things, but I think this year it has to be, probably, as easy as you can to explain it to the public. I think the only thing that dismayed me today was listening to some of the conversation where, even if you are able to make the adjustment this year, that
your problems really are not over yet, because everything is getting more complex, and I am referring to some of the things that were mentioned about the race and ethnic groups, where this is getting so blurred that you have to think about new ways of looking at that.

DR. SCHIRM: A couple of points. One is more of a procedural thing. It came up several times today that some of the evaluations will not be completed in time to really have a bearing on the decision, and that is sensible. That will require some reliance on results from the 1990 census or the dress rehearsal or other research, and that also seems quite sensible. I think, just procedurally, it would be helpful to make it formally clear and transparent where and how results have been used from other evaluations, be it from 1990 or the dress rehearsal. As I said, I think those other results should be used and used systematically, but just making it clear where they have been brought to bear on the decision. The one other point I want to make is just to emphasize a topic that came up here at the end, which is that I would encourage spending time trying to identify patterns of errors in the adjusted estimates by as many characteristics as you can look at. Here the notion is that we may be willing to tolerate equal or even more total error if we can remove certain systematic errors that we find particularly intolerable, or it could very well be that the introduction of certain errors may be intolerable even if total error is reduced. I think it is helpful to do everything that can be done to look at the adjusted estimates and the systematic patterns in them.

DR. ZASLAVSKY: I will try not to repeat things other people have said. This is obviously very challenging stuff, and it is impressive to have such a list of things that you can do before the evaluations. It is like what do you do before the doctor comes when you are in a remote rural area.
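The adjusted estimates Dr. Schirm refers to are built synthetically: each post-stratum receives one coverage correction factor, and an area's adjusted total is its census counts scaled by those factors. A minimal sketch, with invented factors and counts rather than Bureau figures:

```python
# Sketch of synthetic estimation (invented numbers, not Bureau data):
# one coverage correction factor per post-stratum, applied uniformly to
# every area's census counts. The homogeneity of the factor within a
# post-stratum is exactly the assumption the evaluations probe.

coverage_factor = {"renters": 1.05, "owners": 1.01}  # assumed factors

def synthetic_estimate(area_counts):
    """Adjusted total for one area, given census counts by post-stratum."""
    return sum(coverage_factor[s] * n for s, n in area_counts.items())

area_a = {"renters": 400, "owners": 600}
area_b = {"renters": 100, "owners": 900}
print(synthetic_estimate(area_a))  # 400*1.05 + 600*1.01 = 1026.0
print(synthetic_estimate(area_b))  # 100*1.05 + 900*1.01 = 1014.0
```

Comparing such synthetic figures with direct estimates for large areas is one way to probe the assumption that the correction factor is homogeneous within a post-stratum.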
A lot of things have been laid out, but we have not said explicitly how you would tie them all together. In some sense, what we are saying is that what you are looking for is process evidence of a reasonable degree of homogeneity of error in the A.C.E. and a relevant degree of heterogeneity in census quality, with relevant heterogeneity meaning heterogeneity at the levels you are actually trying to measure, so heterogeneity between New York and Boston is not relevant, because those do not correspond to post-strata, but heterogeneity between rural areas and urban areas is, because that is part of what you are trying to measure. Maybe some of that could be made a little more explicit. Part of what is lacking is that you have said we are going to look at all this information and look for some patterns, but what are we looking for in those patterns? I think you have some ideas about that, and maybe that could not be formally prespecified, but the rationale for looking for certain kinds of things could be articulated a little more explicitly before the process gets under way. The other comment I would make is that I do not want—I will be the advocate for not downplaying the quantitative comparisons and loss functions and those types of analyses. I find them very useful and important, especially if we can identify levels at which they are likely to be useful, not excessively small levels
of geography but levels that correspond to large population blocks, not necessarily geographical, for which we know there are likely to be variations and for which we can estimate whether we have been able to reduce those by adjustment or not. I know that some of what we would like to have for that is missing. We do not have direct bias estimates for the A.C.E., so some of that will have to be based on 1990, updated to some extent with the process information we have now compared to what we had from the last census, and there could also be some sensitivity analysis to see how much those comparisons actually depend critically on the bias estimates, or whether, under some range of reasonable assumptions about the biases, the evidence points pretty much in one direction. Also, if we are dealing with the synthetic estimation issues, some comparisons of direct estimates to synthetic estimates, again for fairly large areas for which those comparisons can be meaningfully made, may be helpful. I would like to see those be part of this, though I realize that stuff is not to be applied mechanically as a decision mechanism.

DR. STOTO: I have two things. First of all, I would like to second what a couple of others have already said about how impressed I am with the degree of preparation and thoughtfulness that have gone into these papers and the presentations, and how important I think these are. I was last involved with this issue just after the 1980 census. It was just so hard to try to sort these things out after the fact. Having all this done in advance, I think, is going to be really very positive. I really want to commend you all on this. The second thing is, I think that one of the questions that we really have not addressed today, or maybe it was when I was not here, was how we are thinking about the decision process. Sometimes we were talking about being able to prove that the A.C.E.
and the things that go with it are better than the census, and I think that another way of thinking about it is what is the best possible estimate we can make about the size and characteristics of the American population at a variety of levels, and not to give prior weight to one side or the other, but just to think about how, from the scientific point of view, we can do a good job of estimating the population. There may be some people in town who do not want to think about it that way, but, as scientists, I think that is what we have to keep on the table.

DR. NORWOOD: Any final comments, Ken?

DR. PREWITT: I guess it is not a question that has to be answered right now, but for those who think we should not adjust, the question we put to them, in effect, is, if we are going to adjust, are we doing the right thing; that is, is this a reasonable strategy for trying to make the adjustment decision? Obviously, if you think we should not adjust, then that is fine, I appreciate that argument, but the question is: if we do adjust, is this the right way to approach that decision? I think that is extremely important for us to hear back on. The other thing I would say picks up on what Mike [Stoto] said. We actually have approached this as a process by which we are trying to get an estimate that is closer to the truth and, indeed, we, even after nonresponse follow-up, did about five
or six major field operations, each one of which we thought got it closer, coverage improvement strategies, doing something about the population count, a number of things, even before we got to the data processing stage, or we would not have done them. If one of them had collapsed, we would not have put it into the final count. If some things do not work out, we do not use them. Then we also do imputation. We put whole persons into the census before we report the apportionment count. We, again, do that because we think it is a better estimate. We see the A.C.E. as simply one more step in that process. It is obviously a demanding step, and that is why it is getting the kind of attention that it is getting, though it surprises me how much more attention this process gets than some of the other things we have done to try to get the estimate closer. It is just surprising to me that somehow this one takes on a life, a political life and a public life, way out of proportion to almost anything we did in getting the first 98 percent counted, but, nevertheless, that is the environment in which we find ourselves.

I think the most important thing I would like to say—well, before I say that, let me come back to the panel and, obviously, express the Bureau's appreciation. I am aware this is all pro bono; you are doing this because you have a professional statistical commitment to a quality census. It is extremely valuable to have had the panel as a part of this process over the last year and a half, and extremely important to have the Academy's 2010 committee to begin to move from where we already are in terms of how we will approach 2010. It is appreciated and, obviously, we will all read your report carefully. The entire world will read your report carefully, and I say that because I appreciate the magnitude of the burden on you. You are also going to be scrutinized just as we have been scrutinized.
A lot of people are going to read the committee's report very carefully, looking for nuances, looking for hints, directions, suggestions of internal disagreements, and so forth. This will be a highly visible report, not just for the statistical community but for the political community as well. We do appreciate that there is a very heavy burden on you, and we also realize that you would like to make a timely report, and we will do whatever we can to facilitate your making a timely report.

Again, just to repeat what I said this morning, I would hope that your report, or somebody's if not yours (the Monitoring Board or the subcommittee), would sooner or later systematically address the question of political manipulation. Just to go back to that for a moment: if data are collected by an inefficient organization, then why believe in them? That is why it was very important to us to try to prove that we were an efficient organization in doing the census. But it is even worse if data are collected by a corrupt institution, a politically corrupt institution. Then why believe in them?

We really have an obligation, it seems to me, to say to the society when Census 2000 is over, whether we correct or not, adjust or not, that there was a reasonably intelligent operation to try to collect information about 275 million people, plus or minus, and it was not a corrupt operation, because if the American public is allowed to believe it was either done by an inefficient, ineffective organization or done by a politically corrupt organization, we somehow are not serving the public or making
the kind of statements it seems to me we have to make and should be able to make publicly. I just want to emphasize once again the real importance I put on that.

I guess my only other comment, then, Janet, is I hope you believe us, and I think maybe most of you do (I am sure the panelists do): this is not a predetermined decision. We have not made the decision. We actually are going to look at all of this. Obviously, if we did not think that the A.C.E. and dual-systems estimation would improve the census, we would not have spent the money, the trouble, and the effort to go do it. Obviously, we go into it on the assumption that it will make an improvement, but if it turns out that we think it did not, then we will simply not use it, either because it did not pan out the way we hoped it would or because it was not necessary, because, lo and behold, we did count 100 percent of the American population (doubtful, really doubtful), because I got too many letters from people saying, "No matter what you do, you are never going to count me," and some of whom actually sent in one hundred dollars (that is the fine). They sent in checks, saying, "I'd rather pay than be counted." We got some of them through proxy (maybe even all of them). We did send the hundred-dollar checks back.

I only say we are doing it because we believe in it; obviously, we are a statistical agency. But that is very different from saying we are going to do it, that is, make the decision about whether to adjust or not, based on our analysis in a very, very tight time frame. We, after all, have statutory deadlines and we intend to meet those statutory deadlines. I do hope that the panel and everyone else who is interested in this process appreciates that this is a really honest self-scrutiny about the quality of the census and whether the A.C.E.-adjusted numbers will or will not improve it.
I think, Michael [Stoto], your formulation of it is correct. It is not whether this one is better than that one, but whether the two of them together produce a better estimate than either of them independently of the other. That is exactly how we are approaching it.

DR. NORWOOD: Thank you. Let me just say that I think we, the panel, have benefited from hearing all of this. I would like to particularly thank Howard and his whole staff. I know they have done a great deal of work and I know what that takes. I have had enough experience in statistical agencies to know how each of those papers represents massive amounts of work, so I want to thank you for being so cooperative and for providing us with information on this whole process.

I spent a large part of this summer revisiting the early experience of this country. I suddenly, I do not know why, got interested in the revolutionary period and the period thereafter. I read a number of things, including a very good set of biographies of George Washington, of Thomas Jefferson, and of Alexander Hamilton, and I am trying my best; if I did not have all these other things to read, I would have completed one on John Adams that I am about three-quarters of the way through.
That is where this whole approach to the census started. It started, really, because of all the difficulties of trying to figure out how to balance power. They succeeded, I think, quite well, but it does mean that it left the Census Bureau with a difficult job.

It is also interesting to me that in spite of their belief that a census was a fairly easy thing to do (you just count people), nevertheless, when the census was done, George Washington and Thomas Jefferson were among the first to say that it was not enough people. It is interesting that instead of the argument being among the various states, or even within the various states, the argument was between the United States and the other countries of the world, in trying to show that we really were a growing country with growing power, and, in order to have growing power, we had to have more people. They felt the 4 million or so who were counted was clearly an undercount because, as they said, some people just do not want to answer and we cannot find them, and there are all kinds of reasons. It is a very interesting thing to go back to.

The point is that they succeeded, I think, in moving the country ahead, and my hope is that, whatever you decide to do, and, I might say, whatever our panel decides to say about what you do, which is also very much up in the air, I think you are trying very hard to move things ahead in a very professional manner.

I also want to say that the census is important not just for all its uses. The way in which the census is done is critical to the federal statistical system, and I care a lot about the federal statistical system. If the census is considered to be politically dominated, one way or the other, it seems to me that the entire statistical system will be very seriously affected, because I think that people in this country will lose confidence in everything that the government puts out.
This is really critical, and it is not critical in terms of whether you adjust or do not adjust, or whether we think, if you do adjust, that you did the right thing or not. I do not know what the answer is to all those things; clearly, we have to look at an awful lot of data to know that. What is important, I think, as you started out with this morning, Ken, is to understand that this has to be a completely professional, unbiased approach. We will try to do whatever we can as a panel to give you our best scientific evaluation of all the processes, and I just hope that we get through all of this, as well as the long form, in the not-too-distant future.

I want to thank all of the people here, especially our invited guests who came and participated, and all of the others who are here. I think this has been a very useful meeting.