
INTRODUCTION

DR. NORWOOD: I would like to welcome you all. We have a purpose today. Our purpose, after a brief update, which we will get from John Thompson, about the fact that the census is still alive and running and, from what I hear, doing well, is to have a very careful review of the design for the Accuracy and Coverage Evaluation Survey, A.C.E.

This is not going to be a workshop to review dual-systems estimation. That is not because we do not realize that that is related very much to A.C.E., but rather because this panel plans to have a separate workshop just on dual-systems estimation late this year or in January. At that meeting, we want to be certain that we have people representing all sides of that issue. As you all know, in 1990 at least, there were people who were very strongly for adjustment through dual-systems estimation and there were many people who were professionally very much opposed to it. I would like to have a discussion of dual-systems estimation with both perspectives available to the panel. We intend to do that so that we can have a technical discussion with people of all views presented. We hope to do that in mid-January.1

1. This workshop was held in February 2000 (see National Research Council, 2001b).

Today, insofar as possible, I would like to keep the discussion to the specific elements of the A.C.E. design. I should tell you that I am not naïve enough to believe that I should forget all that I have learned about the fact that when you look at the design of the survey, you have to think about its uses. I am quite aware of that. But I do feel that we need to have a very careful review of dual-systems estimation and that, in order to do that, we need to have a group of people who are critical of it, as well as a group of people who favor it. I think we will have ample time to do that in January.

Today we have invited a group of people who are skilled in survey and sample design. What we plan to do is, first, hear from John Thompson, who will give us an update on where they are. I am very pleased to see that John seems relaxed and comfortable. He is the man with all of the difficult responsibilities of seeing that all these pieces fit together and that the Census Bureau acquits itself well.

UPDATE ON CENSUS 2000

MR. THOMPSON: Let me just hit a few of the highlights of where we are right now. I will start with the decennial budget. We are operating under a continuing resolution. Thanks to bipartisan support in Congress, we have the money we need to keep operating. So, as last year, thanks to the Congress for understanding the census and realizing its importance and making sure that we have the money that we need to keep operating.

A couple of reports have come out recently from the U.S. General Accounting Office. They are available. One is a review of our Local Update of Census Addresses (LUCA) process. I will talk about that in a little bit. We also spent the summer with some General Accounting Office auditors. They went over our 2000 budget in great detail. Their report is also out. It documents the budget fairly well. It also describes what is in it. It is a very interesting report, and I recommend it.

We have had a couple of hearings recently at which Kenneth Prewitt testified. One was on how we tabulate data for Puerto Rico. The other one was on the LUCA program.

The Census Monitoring Board has also issued two reports recently. The first one was a joint report that both the presidential and congressional side issued on advertising. It is also a pretty interesting report. Another one came out on the congressional side, discussing the statistics of the 1990 Post-Enumeration Survey [PES] block-level data.

We have issued a decision memo recently, which would be of interest to everyone, I believe. We have described how we are going to tabulate the data for purposes of redistricting under Public Law 94-171. Basically, we have looked at how we tabulate racial data. This is the first time in the census that we are allowing respondents to report more than one race. We had some initial ways to tabulate the data in the dress rehearsal. Basically, we looked at that, our users looked at it, the Department of Justice looked at it, and we have come to the conclusion that the best way for us to support the needs of the country is to provide the multiracial data as collected. There are basically 63 different ways respondents can report race in census 2000. We are going to tabulate those at the block level and higher, so that the users of the redistricting data will have the data as reported, and that will meet the needs of everyone.

We have just finished a big year for the address list. We call fiscal year 1999 the year of the address list. We have done a lot of work on developing the address list. We talked about this a little bit before. We started in the fall, where we listed all the addresses in rural areas. We continued in the winter and spring by doing what we call the 100 percent block canvass, where we took our city-style addresses and went over the ground with our own people. We also allowed state and local governments to review the address list. This was our Local Update of Census Addresses program. We did that for both city-style and non-city-style areas. We have gotten very good participation. We put all of the results of the address list together and have prepared a computer file that we are using to address our questionnaires. We are basically finishing our address work with the final stages of LUCA. We got addresses from local governments, we matched them to our files, and we are now feeding back to them the results of the addresses that we cannot verify. The next stage will be for the local governments to appeal, if they desire.

We have finished all the process for the rural areas, the non-city-style address areas. The local governments are receiving their feedback. We are in the final stages of doing field reconciliation for the city-style addresses. We will finish that next week, and then we will be starting to feed back to the city-style governments the results of what we did with the addresses, so they can decide whether they want to appeal or not.

We are opening up our data-capture centers. We opened Baltimore in June. We opened up the National Processing Center in the Jeffersonville area. We are opening up the Pomona, California, site next week, and then, in November, we are opening up our Phoenix site. All that is going extremely well. The sites are functioning. We are doing an operational test in the Baltimore site. We are waiting to get the results from that. We processed several million questionnaires through Baltimore to look at various aspects of how the system will work.

We are very busy right now printing census questionnaires. Basically, we are printing 24 hours a day, seven days a week. We have 34 printing contracts out there to print over 426 million questionnaires. We have printed about 300 million questionnaires, and, as I said, we have started addressing the questionnaires that we are going to mail or deliver. That process has started. We have had some experience so far with recruiting and hiring. We have hired about 141,000 temporary persons, mostly for address-list development. We are very happy that we have hired over 5,000 welfare-to-work people. That exceeds our goal.

Our promotion outreach program is also under way. We have gotten over 7,000 complete-count committees formed. A complete-count committee is a local government with some partnering local organizations that will agree to work with us to promote the census locally. We are very pleased that we have 7,000 already. We have also gotten over 29,000 regional partners. These are local organizations that have signed up with the Census Bureau to help us promote the census at a local level. We are very pleased with that. We have also hired over 600 of our total 642 staff we are calling partnership specialists, who will be out there in the communities working with these organizations.

The Census to Schools Program is well under way. We mailed out over 900,000 invitations for teachers to participate in the program. We have gotten back over 300,000 requests for materials. We are very pleased with that as well.

That is basically a synopsis of where the census is. Right now we are on schedule. We are in a little bit of a lull right now. We are opening up our local census offices. We are finishing up the address-list work. We are getting ready for the next big stage, which will be the mail-out and recruitment for nonresponse follow-up.

DR. NORWOOD: We will move on now to our workshop. But before I ask Howard to begin his presentation, I am going to take my prerogative as chair and say a couple of things that have been bothering me a great deal. I think it is important for us, as we begin a workshop on A.C.E., to recognize that everyone in this city is running around worrying about the political uses of the census. I recognize that that is extremely important. But I would point out to you all that today, as we look at the design of A.C.E., we should recognize that there are a lot of other uses for A.C.E., quite apart from whether you adjust or do not adjust. Much of the discussion really ought to focus on the fact that even when there are not problems found, there are a lot of uses—trying, first of all, to know where we are—and many of the uses do not get down to the block level. There are many uses of the census which are national in scope, which are regional, which are state, and then, within states, a variety of different kinds of configurations—including, of course, election districts.

Census data are used for program allocation at all levels of government. I hope I can put in a plug for the fact that they are also used for analysis of where this country has been and projections of where it is heading.

So as we consider the issues that are going to be discussed today, I think we should keep in mind a very broad framework for the uses of the census during the decade. These are the uses that are always overlooked, because people in this country tend always to focus on the particular political issue of the day. Important as that is, having spent a good bit of time in the federal statistical system, I can tell you that there are a lot of other uses of the census data that, in my view, are, in the last analysis, equally important.

So let us try to have a broad perspective of where we are heading.

Howard Hogan is going to present to us the current plans for this survey [A.C.E.]. For each of the topics, we have invited some guests to comment and give us their views. Our panel members always have something to say, and we will all participate. Before Howard begins, I would like to take the opportunity to say that I have spent a lot of time looking at the materials the Census Bureau has provided. I am delighted, really, and a little surprised, that we have received so much so quickly, and in such detailed form. I have worked with the Census Bureau a long time, and I do feel that all of the people at the Census Bureau should be commended for having provided us with as much information as possible, at a stage when, knowing how statistical agencies in general operate, it is very difficult for them to do this.

I want to thank Rajendra Singh for his work in liaison with us and for seeing to it that everything happened quickly and on time. I want to thank John for seeing to it that we had the material we needed, and Jay Waite and everybody else. Howard, I know (because I have been hearing about all this on a daily basis), has done an enormous amount.

I think we should recognize that this workshop is, in many ways, extremely important, because we have a lot of information, and will have—you will hear it all presented—and I do think the Census Bureau deserves commendation for having been as cooperative as it has been. Many of you who know me know that that is quite a statement from me.

OVERVIEW OF A.C.E.

DR. HOGAN: Many of the people who wrote the background materials are in the audience, so you were talking to the people who did the real work. I will convey your message to the ones who could not make it here today.

I want to begin by thanking you and the panel and the discussants and guests and the CNSTAT staff and the Census Bureau staff for coming. Looking over the agenda, I had two emotions, one of which was fright and the other of which was to feel extremely flattered that anybody could possibly look at this agenda and still show up. Thank you very much.

In timing these discussions and choosing topics, we have tried to get a delicate balance between having some real results to present and some substance to discuss and, on the other hand, getting to the point where everything is all sewn up and there is nothing left to discuss. I think Janet and the panel will find us pretty well in that process where we have a lot of stuff here, but we are still at a stage where it will be useful to hear from the panel and get their comments and their insights.

The first part of the agenda is an overview of status and plans. I am going to use this as an opportunity to review all sorts of things. For the panel members, who have been keeping up with this and hearing this often, this will be a review, but there are other people in the audience for whom I think this probably is worthwhile, to sort of set the stage for our more detailed discussions later.

Where are we in terms of the Accuracy and Coverage Evaluation Survey? First, we have drawn what we call the listing sample. That sample was designed before the Supreme Court decision, back when we thought we were going to take A.C.E., or Integrated Coverage Measurement (ICM) in those days, to 750,000 housing units. So it is a very large sample. Indeed, to support a sample of 750,000 housing units, we would actually do a sample of close to 2 million housing units.

That sample was drawn last summer. It has essentially a few strata worth mentioning here at the beginning, sort of general strata, based on the size of the blocks—that is, blocks that have more than two housing units [in the A.C.E. listing] or that have more than two housing units listed by the census Master Address File [MAF]. That is sort of the general sample. We divide that into two groups: medium blocks, 2 to 30 housing units, and large blocks, 30 and above. (I do not know if I got that cutoff exactly right.)

Then we have also a stratum of small blocks, blocks with zero, one, or two housing units in them, and a stratum of blocks on American Indian reservations.
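As a rough, hypothetical illustration of the listing-sample strata just described: the cutoffs below follow the verbal description above, which Dr. Hogan notes may not be exact, and the function and field names are invented for this sketch.

```python
# Illustrative sketch only; the cutoffs follow the verbal description above
# and may not match the Bureau's exact rules.
def listing_stratum(housing_units: int, on_reservation: bool) -> str:
    """Assign a census block to an A.C.E. listing-sample stratum."""
    if on_reservation:
        return "american_indian_reservation"
    if housing_units <= 2:
        return "small"    # zero, one, or two housing units
    if housing_units <= 30:
        return "medium"   # roughly 2 to 30 housing units
    return "large"        # roughly 30 and above

# Example: a 45-unit block that is not on a reservation is a large block.
assert listing_stratum(45, False) == "large"
```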

That sample was sorted out last summer, based on the address files that existed at the beginning of the census. We also sorted that sample within states, based on 1990 demography, the most recent we have, to make sure the sample is spread out proportionately within the states. We allocated the sample to the states based on our plans for the ICM, which really was the sample we had designed for supporting state estimates, and drew the sample. Then we printed our maps, printed our listing books, hired and trained interviewers, and sent them out in the field. We have about 5,000 interviewers out in the field. We started address listing in the beginning of September. It is going as well as anybody who has ever run a real survey can expect. If you heard nothing amiss, then you would know they simply were not telling you what was going on. It is going pretty well, we think.

We also have hired and trained our matching technicians at the Jeffersonville National Processing Center. All the A.C.E. matching will be done in one location. We do not talk about it, but that is a huge advantage, made possible because of computerization of the census. We have hired about 50 technicians, and we are training them. We will be training them—we started in September—all the way through the end of the process. This will be a core staff. At each stage of the matching—and we have many stages that I will talk to you about in a minute—we have essentially computer matching, followed by clerical, followed by having technicians doing quality control and problem cases. We have about eight permanent census people out in Jeffersonville that have been matching, some of them, for 30 years, who handle, basically, pathological cases.

The technicians are, in a very real sense, the core of this, because they are the quality control of the large body of clerks. It is an excellent group. Many of them have done matching, either in 1990 or our various dress rehearsals. We are quite pleased with our recruitment of those people.

We have also designed the methodology to allocate the interviewing sample to the states. We discussed this with the panel, I think, last time. We have to reduce our sample from one designed to support 750,000 housing units down to one that supports 300,000 housing units. We have developed a methodology to allocate that to the states. Based at least in part on some of the ideas the panel recommended, we have implemented the panel’s suggestion of a minimum state sample size of 1,800 housing units, except in Hawaii, where it is about 3,700 housing units. The reason for that is to try to get enough Hawaiians in the race group.

Essentially, we assumed proportional allocation within states and simulated various designs to give us measures of reliability when aggregated up to the state. That sample has been drawn. We are now ready to move on.
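A minimal sketch of that allocation logic, assuming simple proportional allocation with the stated floors of 1,800 housing units per state and about 3,700 for Hawaii; the input structure and the proportional-then-floor rule are assumptions for illustration, and the actual methodology also relied on simulating state-level reliability.

```python
# Illustration only: proportional allocation with minimum state sample sizes.
def allocate_to_states(state_housing_units, total_sample=300_000,
                       floor=1_800, special_floors=None):
    special_floors = special_floors or {"HI": 3_700}  # larger floor to capture Hawaiians
    total_hu = sum(state_housing_units.values())
    alloc = {}
    for state, hu in state_housing_units.items():
        proportional = round(total_sample * hu / total_hu)
        alloc[state] = max(proportional, special_floors.get(state, floor))
    return alloc  # note: the floors can push the total slightly above total_sample
```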

The upcoming operations—and this will, again, be important for some of the things we are going to discuss today—we are going to have the listing for A.C.E. going out with the block maps and listing the housing units. Then we are going to have the block-sample reduction. I will talk in a moment about the various kinds of sample reduction, the various steps to get from the 2 million housing units down to the 300,000. Then we will do housing unit matching. That will be done in the early winter. That has several stages—before follow-up, after follow-up. Each one has a quality assurance operation on it, so it is a multi-step process. Then we will do large block sub-sampling. Then right after those, when the census returns start coming back, we will be actually doing personal interviewing—telephone interviewing for a handful of cases, and then personal visit interviewing. Then we will have person matching, person follow-up, after-follow-up person matching, missing data estimation. I think that is on the agenda for later in the day—what we call the population sample [P-sample], the sample designed to figure out who was missed, and the enumeration sample [E-sample], the sample designed to see whether census records are correct or not.

Then we will finally get to the topic of the upcoming workshop, the actual dual-systems estimator and dual-systems estimation. Then finally, we will be carrying that down and adjusting the actual data file.

So those are the steps. The sequence of the steps is important for the way we handle the sample and the sampling issues. Essentially, in getting from the 2 million we have listed to the 300,000 that we are going to interview, we have three steps. One is what we call block-sample reduction. It is sort of an arbitrary term. It helps us keep our words straight from other stages. That is pretty much what we are going to be discussing today when we get to this afternoon on remaining issues for A.C.E. sample design. We have too many clusters, because we selected them for a much larger number of housing units. Which ones to keep and which ones not to keep—that is a new operation made necessary because of the new design following the Supreme Court decision.

That operation has the advantage of doing this in two steps rather than one. We were sort of forced into the one because of the timing, when we had to draw our samples, print them out, hire the interviewers. But, in addition, we drew the listing sample back in June. As John said, the census has gone on and updated their address list and has more recent information about how many housing units in these blocks are on the decennial Master Address File. In addition, as I mentioned, we have gone out and done our own listing. So now we have two pieces of information on all of our sample blocks. We have the updated census MAF, more accurate than was available initially, and the A.C.E. address file. Of course, we have the difference between those. When we decide how to reduce the sample, that is a very important piece of information.

We also have a stage of sample reduction that we have long planned. We did this in 1990. That is the large block sub-sample. Try to keep that separate from the block-sample reduction. This is where you go out and get blocks of 500 to 1,000 or 2,000 housing units, for all sorts of reasons. We do not want to go out and interview 500 or 1,000 housing units, so we are going to sub-sample that down to 30 housing units in a cluster. We do that after the housing-unit matching, because we want to segment and sub-sample these large blocks in a way that the housing units that stay in from the population side, our independent A.C.E. listing side, overlap with the housing units that we retain on the census side. As in our non-sub-sampled blocks, we have the same housing units from the population side and the census side, so we can match easily and resolve easily. When we sub-sample these large blocks, we want to retain the same segments on both sides. We do that after we have done the housing-unit matching.
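The segment logic can be sketched as follows, with the segment size and random selection rule assumed for illustration: the block is cut into geographic segments of roughly 30 housing units, one segment index is chosen, and that same index is applied to both the A.C.E. listing and the census addresses so the retained housing units overlap.

```python
import random

# Simplified illustration of large-block sub-sampling; one shared segment index
# is retained on both the P-sample (A.C.E. listing) and E-sample (census) sides.
def retained_segment_index(n_housing_units, segment_size=30, rng=None):
    rng = rng or random.Random()
    n_segments = max(1, -(-n_housing_units // segment_size))  # ceiling division
    return rng.randrange(n_segments)
```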

Finally, we have our second stage of the small block sampling. There are just millions of blocks out there with zero, one, two housing units. They are very expensive to list or interview. We have done two things this time, one new. When possible, we have associated the small blocks with a medium or large block, thus cutting down the small-block universe at very little additional cost. The universe left over of these small blocks is now smaller than it has been in the past.

But our methodology of handling that is unchanged. We select a large sample. We go out and list. Many of them will have nothing there. Some will have one or two. Some will have 100; some will have 500. As I said, we drew our sample based on the best information we had back in June. Using that information, we will then take a second-stage sample.

Essentially, in our sampling, we have the three stages, the block-sample reduction, the large block sub-sample, and the second stage of the small block sample. That will get us from our big sample to our small sample—well, 300,000 is not exactly a small sample, but if you had been looking at 750,000, you would think it was small.

Sampling is one stage, but then we have the other topic of today, which is how we define our post-strata. With the sampling, what we need to do—we have already allocated, in terms of the block-sample reduction, to the states—we now need to figure out how to allocate that within the states. We would like to retain adequate sample to make sure we can support good post-strata. It is virtually certain that amongst our post-stratification variables will be some sort of race variable. So in allocating the sample within state, we want to take into account a couple of things, one of which is the racial makeup of the blocks within state to make sure we have an adequate representation of the various groups. Unfortunately, our most recent information, as I will remind you several times, is the 1990 census. So we have to go with that.

Second, we now have more information. We can differentiate between blocks where the A.C.E. lists more addresses than the census and blocks where the census lists more than the A.C.E. We have a lot of information on the housing units in the block that we did not have initially. We can take that into account to select better measures of size in a traditional sample kind of context. Also, if the census is a lot bigger than the A.C.E., that might be an indication of either a coverage problem or a geocoding problem that would have a huge variance implication. If the A.C.E. is much bigger than the census, clearly that would also have huge variance implications.

So in allocating the sample within the states, we are looking at how to take into account our demographic information and how to take into account these new measures of size. That is the topic of this afternoon.
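One purely hypothetical way to fold this information into a block's measure of size, shown only to make the variance argument concrete (this is not the Bureau's formula): give extra weight to blocks where the A.C.E. listing and the updated census MAF disagree.

```python
# Hypothetical measure of size for within-state sample reduction; not the
# Bureau's actual rule, just one way to reflect the variance argument above.
def block_measure_of_size(ace_units, maf_units):
    base = max(ace_units, maf_units, 1)
    discrepancy = abs(ace_units - maf_units)   # large A.C.E./MAF gaps imply large variance
    return base + discrepancy
```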

The next stage—and we will be spending, I think, most of the morning on this—is how we define our post-strata. That will be very important in our design. The post-strata serve essentially two purposes. One, that is how we form the dual-systems estimator. For that we want sort of homogeneous capture probabilities. We are also bringing in the E-sample, the probability of the census record being miscoded or being erroneous. We would like that to be uniform within post-strata and as different between post-strata as we can.

But, in addition, we use the post-strata for the carrying down, for the distribution of the sample to the small areas. So our choice of post-strata is very important, first in terms of the DSE [dual-systems estimate], correlation bias kinds of arguments, but also in the ability to set coverage patterns of the local areas. We are spending a lot of time on this and are certainly seeking advice from a number of groups, including this one, on the best way of going about that.
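For orientation, those two roles can be written in a standard form; the notation below is added here and is not taken from the workshop materials.

```latex
% Standard form shown for orientation only; notation is not the Bureau's.
% C_j      : census count in post-stratum j (data-defined persons)
% CE_j/E_j : weighted correct-enumeration rate from the E-sample
% M_j/P_j  : weighted match rate from the P-sample
% C_{aj}   : census count of area a falling in post-stratum j
\[
  \widehat{N}^{\mathrm{DSE}}_{j}
    \;=\; C_j \,\frac{\widehat{CE}_j}{\widehat{E}_j}\,\frac{\widehat{P}_j}{\widehat{M}_j},
  \qquad
  \widehat{N}_a \;=\; \sum_{j} \frac{\widehat{N}^{\mathrm{DSE}}_{j}}{C_j}\, C_{aj}.
\]
```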

The strategy that we have been using so far is—and we are really going back in history now—if you remember, in 1990, the first set of estimates that we cranked out had 1,392 post-strata. But after the dust had settled and we looked at it, we came up with the set of post-strata that we have been using for our intercensal work, including for the controls to the CPS [Current Population Survey], which we refer to as the 357 post-strata design. That is the one, probably, that people are most familiar with, post-stratifying on race, tenure, age, sex, region of the country, and three measures of size. That is the 357. We developed that around 1992. It seems to have withstood the test of time. People understand it; we understand it. So that is sort of where we have been going in a lot of our thinking.

Just to keep your numbers straight, if you take 357 post-strata divided by the seven age/sex groups, I think you get 51 post-strata groups. For the dual-systems estimate, the age/sex is fairly important, because it makes things a little more homogeneous. Since almost all areas tend to have males and females, old and young, the age/sex has very little predictive ability in carrying down the estimates. So in a lot of our research we focus on the 51 groups, as opposed to the 357.

Beginning with the variables [used for the 357 post-strata] I have just mentioned, we threw in a wide range of other variables and asked all sorts of people, “What is your favorite variable,” in a very broad exploratory kind of approach. I will be talking about that, including some variables that not everybody agreed on, but we thought we would throw them in and see what happens. We ran some regressions on 1990 to see how well any of these predict, and we studied their properties in terms of not just ability to predict the undercount, but also their consistency and usability for use in post-stratification.

After we had done this for a while, we came up with a set of post-stratification variables that looked reasonable, and we are going to start simulating them. We selected some candidate post-stratifications, and we are going to simulate them, based on, again, 1990 data. For each of our designs, we are going to compute a predicted value, map that back to the fifty-one 1990 post-strata groups—in the future, we would map that back to the state or the city or something—and calculate the variance contributions, trying to get a feel for the synthetic bias. Obviously, that is an exceedingly difficult task. We cannot predict the synthetic bias, but we can perhaps scale it so that we make adequate allowance for it. Then we will estimate, in a very broad sense of the word, the mean squared error and variance for various proposals on the table.
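In added notation, the criterion being simulated for each candidate post-stratification amounts to the following.

```latex
% Added summary notation; the synthetic bias term can only be scaled roughly,
% not estimated directly, as noted above.
\[
  \widehat{\mathrm{MSE}}(\text{design})
    \;=\; \widehat{\mathrm{Var}}(\text{design})
      \;+\; \widehat{\mathrm{Bias}}_{\mathrm{synthetic}}^{\,2}(\text{design}).
\]
```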

That work and a lot of the work I will be talking about today is based on the 1990 census, some of it based on the dress-rehearsal data. But then in all this work, we have to translate what we have learned from 1990 to what we might expect in 2000. Something that might predict very well in 1990 may not predict as well in 2000. It is a different census. We have made improvements in a number of areas, which John will be happy to tell you about. We have an advertising campaign, our “Be Counted” forms, other things. So it would be hubris, at best, to assume that the five variables that had the highest correlation coefficients in 1990 would be the best variables for 2000. Things are different. We have to think about what we can infer from 1990, but also what we know about 2000 in making our choices in post-stratification variables. Then we will have our post-strata.

The other research that is going on: We need to work on missing data for the population sample and the census E-sample. In 1990, we had a logistic regression hierarchical model, developed by Tom Belin and Greg Diffendal, among other people. But when we went to the 51 separate estimates that we used for the ICM—in the ICM, each state was going to stand alone—we did not think we could support 51 logistic models. We would have to have 51 teams of highly talented statisticians. (I know Alan Zaslavsky does the work of five, but that still leaves us 46 short.) So we went to a much simpler model, a basic ratio estimate model. When we went from the 51 to our current design, the design where we can share information across states—we now have some choices that we want to research: Can and should we go back towards the logistic regression? What does that gain us? Even if we stay with our ratio estimator, we certainly can support more variables now, more slices, more cells. Which are the most important ones to slice? There are some other issues that we can now think about that we could not under the ICM design.
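As a rough illustration of the simpler cell-based ratio idea (an assumption about its general shape, not the Bureau's specification): within an imputation cell, unresolved P-sample cases would be assigned the weighted match rate observed among the resolved cases in that cell.

```python
from collections import defaultdict

# Illustration of a cell-based ratio approach to unresolved match status;
# not the Bureau's actual missing-data procedure.
def cell_match_rates(cases):
    """cases: iterable of (cell, weight, match) with match True, False, or None."""
    matched = defaultdict(float)
    resolved = defaultdict(float)
    for cell, weight, match in cases:
        if match is not None:              # resolved case
            resolved[cell] += weight
            if match:
                matched[cell] += weight
    # Unresolved cases in a cell would receive this rate as their match probability.
    return {cell: matched[cell] / resolved[cell] for cell in resolved}
```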

So we have some research going on in that. I am happy to say we now have Tom Belin back working with us.

We have some other research, which we can talk about later. We have decided, for the 100 percent data file, that we will only add person records. The discussion we had at the Census Bureau a year or two ago, where we would add families and households to the 100 percent data—it is only person records.

However, when we get to the sample data, by which I mean long-form sample—we always have to rake it 10 ways; John was one of the world’s experts on that—that is where we are going to bring in the results not only of the dual-systems estimate I talked about this morning, but we also have the housing-unit-coverage study, trying to figure out the coverage of housing units using a dual-systems methodology. So when we get to the sample data, we will have the results of both the housing-unit coverage and the person coverage. We will try to bring all of that together.

Finally—this has more to do with our DSE estimator—we are working on some research that you have been given on defining the search area, how far around the sample block we look to determine whether someone was counted or not counted. The flip side of that is, how far from the correct block can a census enumeration be and still be considered a correct enumeration? We had some rules in 1990 that were fairly expansive. We are looking at ways of doing it only where it is necessary, but doing it in a way that minimizes variance and bias.

There are some other topics, but I think that is everything that I wanted to say in the time that I have been given.

DR. NORWOOD: You have heard a quick but quite good overview of all of the pieces of this. What we are going to do is go into detail on several of them.

DEFINITION OF POST-STRATIFICATION

DR. HOGAN: The post-stratification plays two roles: it is how we are going to form our dual-systems estimates, in terms of capture probabilities and erroneous enumeration probabilities, and it is also how we carry down the estimates. We want groups with a similar overall coverage rate. That is really what we are looking for in terms of our post-strata. We use race or age or other things as tags or markers to try to predict these coverage probabilities. But to the extent we are able to predict them, then our estimators will be accurate for all other dimensions.

So I think it is less of a concern with this group. Some other groups get our post-stratification for the A.C.E. mixed up with some questions about how we tabulate the census or what groups are important in American society. Our goal is really to group together people who have similar experience.

As we began looking at the A.C.E. post-stratification, essentially, our first step was rejecting the state boundaries that had prescribed the post-strata for the ICM. For the ICM, we were going to develop 51 separate state estimates. We set that aside in looking at our A.C.E. post-strata because we did not feel that state boundaries carried any real information in terms of chances of being counted in the census, response to the census, linguistic isolation—anything that would relate directly to coverage probabilities. So we set that aside and went back to the 1990 357 groupings.

As I mentioned earlier, we took that group, expanded it, and did some exploratory work using logistic regression, seeing which variables predicted capture probabilities. Then we looked at some other properties of the variables and got them down to a handful. Now we have started simulating the properties of various post-stratification approaches using essentially 1990 data, trying to calibrate the predicted variance for 2000 and also get some handle on the predicted bias for 2000.

As I said, we began by casting our net rather widely, including some things that are a little bit different than things we have tried before. I assume the panel members have their notebooks. (I will do this like I do a graduate seminar. I am used to having students who are far brighter than me, so that will not bother me.)

In Q-9 [Haines, 1999b], we go over some of the variables. The first variable, which will be familiar to many of you, is the race variable. These are the categories we used for 1990, which were analyzed in the 1990 results here as part of this exploratory work. For example, you will see Asian and Pacific Islanders as one group because we are working with 1990. This is part of the issue I will be talking about later. We have to translate these into 2000 concepts.

DR. NORWOOD: Howard, may I just interrupt you? You keep saying that you are using 1990, but I assume that when you get to 2000, it is possible that you will use 2000.

DR. HOGAN: Yes. When I say I am using 1990, I should be very clear on this. As I said, we expanded our range of variables we are going to look at. We are going to do some exploring of what we can learn about the properties of those variables. In that exploring, the data that we have available are 1990. However, at the end of this exploratory process, we are going to define a set of 2000 variables using data gathered from 2000. So for any real definition of a post-stratum for the 2000 A.C.E., we will use race as reported in census 2000, age as reported in census 2000.

But for this first very preliminary step of exploring the properties, we are stuck with the only data that we have, 1990. For that, we are stuck with the race variables for 1990. I will discuss how we have modified those in dress rehearsal and some of the issues in terms of how we may have to modify them. Race, with the new Office of Management and Budget [OMB] directive, is defined differently in 2000 than it was in 1990. So even if these were the best, we would still have to change them.

Age/sex is defined. Tenure I think you are all familiar with. Household composition is one of those dark horse variables that we threw in to see where it would take us. It is fairly complex. I will not walk you through it. But it is trying, with some ideas that came out of our Population Division, to use relationship to head of household to figure out who within a household is part of the count and who is not. Relationship, very simply, “Are you directly related to the head of the household, or the person in column 1, or not,” again exploring the idea that people who are less directly attached to the person in column 1....

DR. NORWOOD: To the reference person.

DR. HOGAN: Yes, the reference person.

The next one—and this is a 1990 variable we are using for exploratory purposes—is urban size. This is what we used in the 357—urbanized areas over 250,000, other urbanized areas, towns and cities, and then non-urban, rural areas. This, for example, is going to have to be redefined for 2000, because these concepts are defined after the census. What is an urbanized area is defined after you take the census. This is a very important variable. We probably have to use some preliminary version of this that is available when we actually conduct the A.C.E.

Other variables we looked at [included] percent renter. Unlike tenure, which is an individual variable, this is looking at the community: Is it an area where a lot of people rent or is it an isolated renter in an area where most people own their housing units?

Response rate is another environmental variable, really trying to tap into response to the census. In areas where many people do not mail back their questionnaires, a lot of work gets done by follow-up. That can lead to errors, missing people and also erroneously counting people. So it is, first, an indicator variable of cooperation with the census, but also has direct operational impact on our ability to carry out the census efficiently.

Again, an environmental variable: percent minority, as opposed to race, because it is in the area that the person is sampled from. Household size we look at, and obviously related things like relationship. Then the next one grows out of work from our Population Division. Many of you may know Greg Robinson’s work on 1990—this would really be a 1990 variable—hard-to-count scores, looking over what makes an area hard to count. It is part of our tool kit that we are distributing to our regional people to help them plan and conduct the census. If we believe that it predicts where the census will be difficult to conduct, maybe that will be a good variable. At least it is worth looking at.

A couple of other variables we looked at: In 1990, we started out, in the 1,392, the original post-strata that I mentioned earlier—we used census geographic division. We really could not support that in terms of our sample size. So we redefined it in terms of region, and the 357 is based on region. But perhaps we should go back and look at division again.

Then another idea is, rather than division and region, which are sort of tabulation concepts, why not see if things are related to how the census was taken, the census regional operational field offices, the 13 cities that were used in 1990. Of course, they are different cities, at least in one case, for 2000.

In 1990, we had about 150,000 housing units. We are going to have about 300,000, so we have roughly doubled the sample size. We think, in terms of minimum stratum size, we were about on the edge in 1990. We certainly did not have lots of extra in terms of some of our smaller cells. From that and from some other research that we have done, we infer that we probably can add one more dimension, but probably not two or three of these variables. It would be possible to add, say, mail response rate, or it might be possible to move from region to division. But you probably could not move from region to division and add mail response and throw in something else. It is just beyond the sample size that even the generous 300,000 housing units can support. So a lot of our thought process is gauged to selecting the most predictive of these.
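A back-of-the-envelope version of that sample-size argument, purely for scale (real post-strata are far from equal-sized):

```python
# Rough arithmetic only; actual post-strata are far from equal-sized.
sample_hu = 300_000
print(sample_hu / 357)        # ~840 housing units per post-stratum on average
print(sample_hu / (357 * 2))  # ~420 if one more two-level dimension were added
print(sample_hu / (357 * 4))  # ~210 with two more such dimensions
```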

That, I think, is the essence of this. Are there any questions on this?

DR. ZASLAVSKY: Howard, there is no longer any kind of constraint of within-state estimation?

DR. HOGAN: No, definitely not.

DR. ZASLAVSKY: It is all gone.

DR. HOGAN: It is gone. Our post-strata will definitely be cutting across state lines.

DR. SPENCER: May I get some clarification? Are you using direct state estimates that you are controlling the within-state estimates to, or are you not doing that?

DR. HOGAN: No. We are doing very much like we did in 1990, where the direct estimates are at the post-stratum level and then they are carried down synthetically everywhere.

DR. SPENCER: So state-level estimates are affected by data from other states.

DR. HOGAN: That is right. The only exception will be Puerto Rico, where we will be conducting a stand-alone A.C.E. We do not think we can wire a string from Miami or New York or Boston. Puerto Rico will be stand-alone because of the constraint. Otherwise, everything else is shared.

DR. ZASLAVSKY: I know this is jumping to a future topic, but I just need a little bit of an answer to understand what you are talking about here. Are you doing this all on the assumption that the estimation is all going to be direct for every post-stratum and then put together synthetically? Or is there any kind of modeling? Obviously, it makes a big difference, especially on the last point you made about how many dimensions you can add.

DR. HOGAN: As you know, but maybe everybody does not, in 1990, we began with a fairly sophisticated model. We did not return there in planning census 2000. In the dress rehearsal, when we were forced to come up with state-by-state estimates, we were using raking to try to control some of the within-state variations. If you take even an average state, Indiana, and you have to produce not just an Indiana total, but race and age and owner or renter—we felt we needed raking to get some control of that. Clearly, the individual cells in each state will be very, very small.

We are leaning against that now. I do not think we have a decision memo saying we are definitely not raking, but our preference is really to define post-strata large enough so that that will not be a necessity, define post-strata large enough that we will not have to collapse, except if something very strange happens, predefine post-strata that we expect can stand on their own. We have done some work on raking, but we are really not drifting in that direction. But we have not completely eliminated it and will not until we know how many variances we absolutely require.

DR. ZASLAVSKY: You mentioned collapsing. Won’t you need to collapse within some of the racial categories across something else? You always have in the past.

DR. HOGAN: Yes. It is true that if we define a set of classifications by, say, region and tenure and size and race, we will not be able to support that everywhere. We know that. What we would like to do is, sort of in advance, know where we are very unlikely to be able to support it and set it up and sample, and set up our estimation programs and files, so we have already combined them before we go in. I think that is a much safer kind of approach than trying to dynamically run through—this one is too small and then you collapse with that, and that is too small and you collapse with that.

We still may get into that. I will give you one example in a moment. But we are going to try, to the extent we can, to minimize that and to think carefully in advance about where are the few times it would have to be done.

The other thing is, in 1990, we did not collapse across age/sex at all. We sort of maintained that. I am not sure that that strategy has much to really recommend it. It is certainly possible to first begin by collapsing on age/sex and maintain sufficient sample size in these cells.

The one place where we are concerned is Hawaii and the Pacific Islands. Most of the other groups we know enough about from 1990 and the social patterns from 1990 to 2000. We are pretty sure that we are going to have enough African Americans in our sample, enough Asians, enough Hispanics. But when we get to the Hawaiian population, where it is very difficult to do the sampling—it is so diffuse and scattered—we have taken steps to try to maintain a sample there. That is why we have such a huge sample in Hawaii. But it is quite possible that that is one group where we would have to, after we looked at the sample size that actually got drawn, collapse it.

So you are right. We will not be able to support the whole cross-classification, but we would like to pre-identify where we are going to collapse and say, “Do we have enough sample to even support the combined cells?” Did that answer your question?

DR. ZASLAVSKY: Yes. I may raise it again when we get to estimation.

DR. HOGAN: Okay. Then, having defined these variables, one of the issues in post-stratification for dual-systems estimation is that you want a variable that is defined and measured the same in the E-sample of the A.C.E. as in the census—after all, you are going to use the A.C.E. response to categorize the A.C.E. people and the census response to categorize the census, and you are going to use the census responses to carry down your estimates to the small area. If there was a systematic bias between how a variable was interpreted in the mailout/mailback census and the face-to-face interview in the A.C.E., or otherwise there was response and recall bias and variance, then the same person can go into two different cells, one depending on how he or she responded to the census and one depending on how he or she responded to the A.C.E. Also, then, when we carry it down to the local level, the group we would carry down would be, obviously, the census responses. To the extent that the undercount is driven by omissions, it would be the A.C.E. responses that dominated.

Let me mention—because this always comes up—why don’t we just borrow one from the other? If someone responds in the census that he is an owner and in the A.C.E. he is a renter, why don’t we just force them into agreement? If in the A.C.E. he is 18 to 29 and in the census he is under 18, why don’t we just force an agreement so that everybody is in the right cell? That would be an excellent thing to do if you could put everybody in the right cell. But, of course, you cannot do that for the non-matches. The whole point of this game is to get the right ratio of matches to non-matches, the enumerated to the not-enumerated. If you start recoding the A.C.E. responses that match the enumerated people based on whom they matched to, and leave the non-matches according to the original responses, you can introduce a whole new dimension of bias that we are avoiding.

So if someone responds with two different ages, census and A.C.E., he will go into two different post-strata. It is important to define the post-strata to minimize this. One of the things we will look at in terms of our variables is how consistently they were reported. Here, unlike the previous one, we are really looking at some of the results from the dress rehearsal, where we have the 2000 kinds of variables. If you flip to [page 1 of] Attachment A [Salganik, 1999], you will see something that says something like “1998 Dress Rehearsal, South Carolina Responses.” Tenure is one of the variables that we are very comfortable with. We researched it in 1980, used it successfully in 1990. You can see—these are for the matched cases—between the P-sample and the E-sample a fair degree of agreement, whether it was owned housing unit or rented housing unit, although a certain amount off-diagonal.

There are some calculations below: percent inconsistent, the number off the diagonal divided by the total, about 5 percent.

Another question is, is there a systematic bias going on here? Are more going in or out? Is it not just a response-variance kind of issue, but a response-bias kind of issue? For that we have calculated the number of non-balanced, the difference between off-diagonals divided by the total.

You can see for South Carolina the percent inconsistent total is about 5 percent—not too bad. Very balanced, though, 0.1.
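Those two summary measures follow directly from the matched P-sample by E-sample cross-tabulation; the sketch below uses made-up counts for a binary variable such as tenure.

```python
# Percent inconsistent and percent non-balanced for a 2x2 P-sample by E-sample
# table of a binary variable. Counts in the example are made up, not dress-rehearsal data.
def consistency_measures(own_own, own_rent, rent_own, rent_rent):
    total = own_own + own_rent + rent_own + rent_rent
    pct_inconsistent = 100 * (own_rent + rent_own) / total    # off-diagonal share
    pct_nonbalanced = 100 * abs(own_rent - rent_own) / total  # net drift between sources
    return pct_inconsistent, pct_nonbalanced

print(consistency_measures(480, 26, 24, 470))  # (5.0, 0.2) with these made-up counts
```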

So that is the kind of information we are thinking about in bringing this in. I will flip through these rapidly, because I am sure you have all studied this at home.

Tenure for Sacramento is about the same, no real difference.

Age/sex is a little bit surprising. This is page 3 of the attachment. Percent inconsistent is fairly low, actually, less than 5 percent. The non-balanced is about half of a percent in South Carolina, and the same kinds of numbers you will see in Sacramento.

But if you want to take time to study this, there are some interesting things going on here. For example, we have males turning into females, females turning into males; under 18 becoming over 50. Probably this is not all response variation. It probably has to do with data-capture variation. We are exploring this. We do know that in the dress rehearsal—we were very explicit about this—the census data-capture system was a fairly preliminary version. I do not know if it was an alpha or a beta or a gamma. We have documented this in our evaluation report separately. There were a lot of data-capture errors created by, simply, the scanning system, separate from the A.C.E., obviously. Those are very, very important, and the Census Bureau has been working with our contractor to improve that system for census 2000. But that kind of data-capture error might be what is leading to some of these huge off-diagonals. We are looking at that.

DR. COHEN: Are these just really beat-up forms or are there stray marks? What is going on?

DR. HOGAN: Among other things, it is an optical character-recognition system. Its first real dry run happened to be the dress rehearsal. John earlier talked about how we are doing another operational test in Baltimore very soon to test our refinements. But the dress rehearsal was essentially the first time they turned it on, and they had not tuned a lot of this stuff yet.

DR. BROWN: The issue you raise is clearly a concern, about optical recognition. But I kind of wonder, in some of these cases, if some of these inconsistencies are due to the imputed.

DR. HOGAN: Yes, I think imputed are included on both rows and diagonals.

DR. BROWN: The question is, how much of these really wild inconsistencies are really imputations?

DR. HOGAN: I do not have that. I think we can probably tease it out of our files.

DR. SPENCER: How does this compare to the 1988 dress rehearsal?

DR. HOGAN: I really do not know.

MR. THOMPSON: It is a totally different data-capture scenario. We would be comparing apples and oranges.

DR. SPENCER: I am trying to think what kinds of effects this misclassification has on the estimates. It sounds as if you are really going to solve a lot of this problem by looking at the data capture, and maybe imputation.

MR. THOMPSON: Let me jump in just for a minute on data capture. The dress rehearsal was the last level of testing before we put in place the final data-capture system. The problem was, we had a continuing resolution for the dress rehearsal that delayed us over six weeks, in terms of our ability to print the forms. So we could not deliver the forms to the contractor on time. Actually, the contractor did not have to process the dress rehearsal because we were late, but because they wanted to work with us, they went ahead and did it anyway. Since we got a late start, we ran into all kinds of problems in the dress rehearsal, which we believe we have solved. That is why we are running this test in Baltimore, and we are continuing to run several other tests. We are pretty confident that we are going to bring some of these discrepancies into line. The real thing will be to look at the results of the test we are running in Baltimore and see how that applies to these data, before we can measure any possible effect.

DR. HOGAN: In terms of the effect on the estimates, it really depends on what groups they are going between. If they are going between two groups that are very similar, it does not matter much.

DR. SPENCER: Can we say, aside from the data-capture issues, anything about the implications of the multi-race question, whether this is going to increase inconsistency?

DR. HOGAN: Yes. Let’s flip a couple of pages and discuss it. The race issue I will be talking about. Before I get to that, is there anything else on this one?

DR. KALTON: I was just going to ask the imputation question, because that is a very likely source of inconsistency. If you look at the amount of imputed value and look at the amount of inconsistency, it is a fair proportion of that.

DR. LITTLE: Just to make sure I understand what this is saying, if you have an inconsistency on age, these are people who have different ages, but then in the reconciliation it is determined that they really are a match.

DR. HOGAN: That is right.

DR. LITTLE: So you can definitely determine that that is an error of coding rather than two different people.

DR. HOGAN: Yes. Through the evaluation studies, we checked the matching in the dress rehearsal. I am convinced totally that this is not a matching problem. When we do the matching, we bring in name, relationship to head of household, address, things like that. That kind of overall look at two reports allows us to match either a case where the age is missing or where age may have been mis-scanned. So I think this is not that we are mismatching people, except for a trivial amount. It really is that we have the right people lined up, and through the A.C.E. data entry, which is not flawless, or census scanning or the imputation, we have created a difference.

DR. LITTLE: I guess I would echo what other people have said. It is important to find out what kind of a problem it is, to really figure out what the implications are.

DR. HOGAN: We will be doing that.

DR. EDDY: So now I am confused. On [Salganik, 1999:Attachment A] page 2 and page 3 that we are looking at, are all the inconsistent cases on page 2 distinct from all the inconsistent cases on page 3?

DR. HOGAN: No, no. We have a matched file.

DR. EDDY: I appreciate that. Sorry, take page 3 and page 5. The point is that you have done a cross-tab for one of the variables. Imagine a higher-dimensional cross-tab for two of the variables. I am asking, are there any cases in common?

DR. HOGAN: I am sure there are. We have not done that.

DR. EDDY: That would seem to me to be pretty important, because it actually says something about the matching.

DR. HOGAN: I am not sure what it says about the matching, but I think it is worth looking at.

DR. EDDY: If you have enough variables that are inconsistent, then maybe the matching is in bad shape.

DR. BROWN: Or maybe it is imputation.

DR. EDDY: I think it would be important to look at.

DR. SPENCER: What are the implications of a bunch of the inconsistencies being due to imputation? Does that mean there is less of a problem or not?

DR. HOGAN: No. I think, in terms of its implications, we are using the final report for our DSE; we are using it for our carrying-down. If we chose a variable that had a lot of missing data that had to be imputed and were not imputed consistently in the two, that would have implications.

DR. SPENCER: But you could change the way you do the imputations to get much greater consistency between the two. I do not know if you were asking this before, Larry. I think you were leading towards it.

DR. BROWN: Yes.

DR. HOGAN: And if we can do that in a way that treated matches and non-matches equally, we will consider doing that.

Shall I go on and address your next question, Bruce? Your next question was, what about race? The next couple of tables give you some of the results for the race question. Some of this is the same as age. It probably has to do with data capture, imputations, everything else. Some of it may be due—and we are exploring this— to the multi-race question. The dress rehearsal was the first time we have ever, in terms of A.C.E., ICM, PES, dealt with the multiple-race option. We made a decision at that point to put multiple-race people with the largest minority group that they chose. Having no data, that seemed a reasonable thing.

That, I think, created some—some, not all—of this inconsistency of reporting on race. The categories are using our 1998 recode definitions. We are thinking about other ways, perhaps, of coding the multiple-race in a way that gives us more consistency between the two, and probably more power in terms of predicting coverage patterns.

DR. NORWOOD: May I go back to something? John, you said something about how you are going to tabulate race/ethnicity earlier. Could you repeat that?

MR. THOMPSON: Certainly. There are 63 different ways in which an individual could respond to the race question. Our plan for census 2000 is to tabulate the data as reported. That is, for every block, we will tabulate 63 different categories of racial response.

DR. NORWOOD: That is, you will tabulate what people put down.

MR. THOMPSON: We will tabulate what people put down. Howard’s challenge is to assign them a coverage factor. We do not have enough samples to support all 63.

DR. NORWOOD: I understand the difference. I just wanted to be sure I understood that.

MR. WAKSBERG: In connection with these inconsistencies, which are so troubling, there is an issue of how you will classify them for the post-stratification. I assume you will use the census classification for race, for age, since you are going to apply whatever undercoverage factors you get to the census counts. Am I correct on that?

DR. HOGAN: No.

MR. WAKSBERG: I am glad I asked.

DR. HOGAN: We have three things going on. We have the sample population, sometimes known as the P-sample. There, for getting the ratio of enumerated to not-enumerated, by age or any other category....

MR. WAKSBERG: Which age will you use?

DR. HOGAN: We will use the age reported in the A.C.E., age reported in the PES, whatever you want to call it. The reason we do that is, on the match cases we could reclassify age, but on the non-enumerated/non-match cases we could not. Since the whole goal is to get that ratio correct, we could very likely translate a response-variance problem into a very important response-bias problem. If you only reclassified the matches one way and left the non-matches in the original category, you could really mess things up. So for the P-sample, it is the P-sample age. For the census enumeration sample, the E-sample, it is the census age. For the carrying-down, we use the census age.

This is one of the pieces of the puzzle that we need to think about in choosing post-strata. There is no variable where it is perfectly consistent. But before you choose a variable, you should take a moment to see how consistent it is, because it is important.
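For readers keeping track of which classification enters where, a schematic dual-systems estimator for a single post-stratum can be written as follows. This is a generic textbook form, not necessarily the Bureau's exact production formula; the braces simply mark which sample's reporting drives each term.

\[
\widehat{N}_{\mathrm{DSE}}
  \;=\; \underbrace{C}_{\text{census count}}
  \times \underbrace{\frac{\widehat{CE}}{\widehat{E}}}_{\text{E-sample (census classification)}}
  \times \underbrace{\frac{\widehat{P}}{\widehat{M}}}_{\text{P-sample (A.C.E. classification)}},
\qquad
\widehat{\mathrm{CCF}} \;=\; \frac{\widehat{N}_{\mathrm{DSE}}}{C},
\]

where \(\widehat{CE}/\widehat{E}\) is the estimated correct-enumeration rate from the E-sample, \(\widehat{M}/\widehat{P}\) is the estimated match rate from the P-sample, and the coverage correction factor \(\widehat{\mathrm{CCF}}\) is what gets carried down synthetically to census records classified by their census-reported characteristics.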

DR. LITTLE: Yes, but it seems to me that if this inconsistency is due to imputation, and the resulting misclassification creates a bias in the estimate, then the method of imputation is flawed: you should have a method for assigning these variables that takes into account the probability of a match. So cases that match with high probability, for example, should be receiving the same values of these variables. If you have a match probability of 70 percent, 70 percent of the time the values to be imputed should be the same; the other 30 percent of the time, they should be different. Do you see what I am saying? The imputation should take into account the match probability.
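To make Dr. Little's suggestion concrete, here is a minimal sketch in Python, with a hypothetical field and a made-up match probability; it only illustrates the idea of drawing the imputed value equal to the P-sample report with probability equal to the match probability, and is not any actual A.C.E. procedure.

import random

def impute_e_value(p_sample_value, donor_pool, match_prob, rng=random):
    """Impute a missing E-sample characteristic (hypothetical helper).

    With probability equal to the estimated match probability, copy the value
    reported for the linked P-sample person; otherwise draw from a donor pool,
    as a true non-match would be expected to differ."""
    if rng.random() < match_prob:
        return p_sample_value
    return rng.choice(donor_pool)

# Example: a case judged to match with probability 0.7 keeps the P-sample age
# category 70 percent of the time and draws another value otherwise.
print(impute_e_value("30-49", donor_pool=["0-17", "18-29", "30-49", "50+"], match_prob=0.7))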

DR. SPENCER: Doesn’t that get a little circular?

DR. LITTLE: I still think the problem is not in the post-stratification variable; the problem is in how you are imputing that variable.

DR. SPENCER: I know, but to get the probability of match, you are going to use the variables—are you going to use other variables in figuring out that probability of match?

DR. LITTLE: I think the problem is, the imputation method is not taking into account the match status.

DR. HOGAN: I would be interested to understand what you are saying, but my bottom line is that I have to treat the enumerated the same as the non-enumerated. To do otherwise is to introduce dependence, where previously you just had variability.

DR. LITTLE: I still think that if you have the right imputation method, then classification by the imputed or true value should not affect the estimate that you get at the end. If you are getting bias because of the imputation, I think the problem is in the imputation method, not in the post-stratification.

DR. CITRO: But these are separate. Imputation is going on for the entire set of census records, not just those that are in the E-sample. I do not see how you could operationally do what I think I hear you saying you want to do.

DR. SPENCER: In some way, and it is not clear how, you would like to do imputations in a different way so you can get more consistency. Is that what you are trying to get at?

DR. LITTLE: Right. If you imputed draws, then circularity would not be an issue, for example.

MR. THOMPSON: Rod, just to clarify, the challenge is to impute for the whole census file in a consistent way, not just the part of the census file that is in the A.C.E. sample. You really have to impute the whole census file, because that is what you are going to be adjusting. The challenge in what you are saying is taking that concept and applying it not just to the 300,000 census cases, but all of the 120 million households. That is the challenge.

DR. SPENCER: But suppose you took the imputation in the census as fixed. You are going to do that the way you are going to do it. Then the question is, can you do the imputation in A.C.E. in a way that will give you more consistency in the imputed values?

MR. THOMPSON: So you are saying impute the P-sample.

DR. SPENCER: Right.

DR. HOGAN: To make that work, you may have to impute P-sample values that you have responses to already.

MR. THOMPSON: I think the big issue that is coming up is that what you are really saying is, look at the imputation very carefully, which is a very good suggestion.

DR. SPENCER: That is an interesting idea, though. You could even define your post-strata allowing for imputation of an observed value in the P-sample.

DR. EDDY: So if I understand it, you are struggling to keep from increasing the bias, and you are willing to pay in variance.

DR. SPENCER: It depends how big the bias is, Bill. It depends how big the bias is from this. If you have a lot of inconsistencies and they are distributed in a very unfortunate way—I have no reason to believe they are, but if they are—the bias could be very big.

DR. KALTON: Related to that issue, I was looking at the table on page 6 [Salganik, 1999:Attachment A] and looking at Native Americans, for example. You see quite a different count of the E-sample from that of the P-sample. This is not just imputation. There are all sorts of things going on there. You can well imagine people answer differently in those two different modes. That seems to me a particular cause of concern. Am I right in saying that?

DR. HOGAN: You are right in saying that. In this particular table, we had a glitch in the coding that exaggerates the concern. But the concern is there.

DR. BROWN: Turn to page 8 [Salganik, 1999:Attachment A], where there are about twice as many errors and inconsistencies in one direction as in the other.

DR. HOGAN: That is a very real concern. That, to me, says it is not just a variability problem, but it really means something different.

In addition—and this is one of the reasons we are setting this on the back burner—with this particular variable, if you think hard about it, the classification depends on who else in the family got counted. So it gives a whole new level of complexity to the model that I certainly have not fully understood. But given that kind of causality, and given that this is clearly not just random churning but rather a matter of how the interviewer interprets the question, the responses are systematically inconsistent.

DR. BRADBURN: I would just make two generalizations about these categories, because I think household composition is a good example. The first is that categories we know from developmental work are hard for people to answer are more likely to be inconsistent if you ask them twice. If I understand this correctly, it may not be the same respondent.

DR. HOGAN: That is right.

DR. BRADBURN: So the person who fills out the form in the E-sample may not be the person who is filling it out in the P-sample. So any of the categories that are difficult for people to understand, of which household composition, we know from some earlier work, is one of the more difficult ones, is likely to be inconsistent.

The other generalization is, the more categories you have, the more likely it is you are going to have inconsistencies, from a whole variety of issues, some of which are cognitive and some of which are simply recording errors, for instance. So if you go to 63 possibilities for something, you just know you are going to have inconsistencies, because there are more opportunities to have inconsistencies. If it is a dichotomous variable, it is easier than 63 possibilities.

Those two things should guide you a little bit, I think, in thinking about good candidates for stratification on this particular issue. It may be overridden by other things.

DR. ZASLAVSKY: Another thing I would like to see—you have taken this up to a certain point, which is just doing the cross-tabulations. From this you can draw some conclusions about how much effect it is likely to have on the estimates. For example, if 5 percent of the people are being misclassified and that 5 percent is 3 percent different in the unmatched cases than in the matched cases, then your estimate of undercount changes by 3 percent of what the undercount was—that kind of calculation. That makes these differences of a few percent seem not very important, whereas the differences of being off by a factor of 2 on the estimates of Native Americans in an area with relatively few Native Americans—we know that people answer that question badly in the areas where the correct group is very small. It is easy for the errors to be very large compared to the correct group.

Can you play how it would work through the calculations in order to affect an undercount rate at the end of the road, so we can actually figure out whether these things are cause for concern, rather than just looking at some kind of an aggregated number and saying, “Two percent, that must be all right”?
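One way to play through the kind of calculation Dr. Zaslavsky describes, with purely hypothetical rates and ignoring flows into the domain:

# Hypothetical rates: misclassification shifts the undercount estimate only to the
# extent that it differs between matched and unmatched cases.
match_rate = 0.95            # true P-sample match rate in the domain
out_rate_matched = 0.05      # matched cases misclassified out of the domain
out_rate_unmatched = 0.08    # unmatched cases misclassified out (3 points higher)

kept_matches = match_rate * (1 - out_rate_matched)
kept_nonmatches = (1 - match_rate) * (1 - out_rate_unmatched)
observed_match_rate = kept_matches / (kept_matches + kept_nonmatches)

true_undercount = 1 - match_rate               # 5.0 percent, as a crude proxy
observed_undercount = 1 - observed_match_rate  # about 4.85 percent
print(round(100 * (true_undercount - observed_undercount) / true_undercount, 1))
# prints 3.0: the shift is about 3 percent of the undercount itself, driven by the
# 3-point differential, not by the share of people misclassified.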

DR. HOGAN: I think it is worth working through. This particular study is really a cautionary tale to precede the one I am going to talk about. The next one we are going to talk about is when we just took the P-sample variable and used it to predict P-sample coverage probabilities. It is easy to get carried away and say, “Gosh, I found one that just predicts wonderfully.” Before you go too far down that road, you have to ask how well it is going to work in the whole system.

So here is an example of a variable that, if you just throw it in a regression model on the P-sample side alone, looks really good. But this work gave us pause and said there are some other things going on here that make it not quite as good as it might otherwise look.

DR. SPENCER: I got lost there at the end. It predicted whether an individual was—looking at the P-sample—and you say whether this person was enumerated in the census, and you have a variable that works really well for that.

DR. HOGAN: It works well.

DR. SPENCER: But that is misleading, why?

DR. HOGAN: If you look at this table, [page] 8 of Attachment A [Salganik, 1999], we are going to be using this variable for three purposes: to classify P-sample people, to classify erroneous enumerations, and for carrying-down. If it is reported inconsistently in the two—and, in this case, in a fairly systematic way— then it may be very good for purpose one, predicting coverage inclusion, but less suitable for the other two uses.

DR. SPENCER: So how do you decide?

DR. HOGAN: On this one we decided to err on the side of caution, not to get too excited about a bright idea because we did not understand this inconsistency, and we were concerned about the causality that I mentioned earlier.

DR. SPENCER: I mean it as a general question about how you evaluate alternative post-stratification schemes. The choice of variables and questions of consistency are related to that. I do not know. For instance, you could say, what do we do in terms of our estimates; how do the sex ratios compare to what we find with demographic analysis? Then you can run that for any post-stratification scheme you want, and you can do other things like that.

DR. HOGAN: At least on a subset of our variables, we will be doing that.

MR. THOMPSON: I think what you are going to see today—Howard has looked at these variables. There are at least three things. One is the consistency of reporting. One is the variance-reduction properties. And he has been trying to get a handle on the degree of bias reduction we might get, by comparing it to demographic analysis. When you look at those three things and some other things, then you have some information with which you can make a judgment on the groupings of post-strata to best serve your purposes. Did I mischaracterize it?

DR. HOGAN: No. You characterized it very well indeed.

DR. NORWOOD: Howard, if you have the people looked at in relation to other members of the household, and the reference person in the A.C.E. is different from the reference person in the census, isn’t that going to throw everything off?

DR. HOGAN: Yes, yes.

DR. NORWOOD: Isn’t there a very good likelihood that that could happen?

DR. HOGAN: Yes. That is exactly why we did these tables. It was questions like that. We said, let’s look at the data. That is very, very much a concern. One is predominantly a mailout/mailback form; the other is a face-to-face interview later. How that question is interpreted in sequence and whatever—the other thing is, let’s assume that the census coverage is better than the PES coverage, the A.C.E. coverage, and it is much better at picking up an additional adult member of the household. Then that changes the whole household composition for everybody in the household. The variable then becomes very confused.

MR. WAITE: When you combine that kind of situation with what Howard said, how we decide and answer Bruce’s question, we are going to be very careful and very conservative. That suggests that it would take a rather large gun and not a small pistol to go into someplace where we are having an awful lot of problems with consistency, and we do not have much confidence that we are going to be able to get consistency. This stratification variable might make it through, but we are going to have an awful lot of obstacles in there.

MR. WAKSBERG: Howard, in connection with this, it seems to me that there needs to be a somewhat clearer definition of what the goal of A.C.E. is. Is it to detect undercoverage, or is it to detect errors in the census counts of the number of blacks, for example, Hawaiians, and so on? The two are not the same question. The second one gets involved in errors of classification. I assume that the purpose of the A.C.E. is really focused on people who are missed or counted twice.

DR. HOGAN: That is right.

MR. WAKSBERG: Should you be mixing up these two concepts?

DR. HOGAN: We try not to mix them up. Our goal is definitely not to correct errors of classification.

MR. WAKSBERG: In some of this discussion, we have really been talking about how errors of classification affect this.

DR. HOGAN: We are talking about how errors of classification affect our ability to predict the number of people, let’s say, in a small area. But we are not trying to focus on errors of classification as a goal itself. That is why if the street address, the name, the relationship, and everything else lead us to believe that two people are the same, we will match them, even if one is white and one is American Indian. We are interested in whether that person was counted in the census, not whether he was classified properly. But errors of classification have implications in our ability to estimate coverage for any group and estimate the count for any geographic area. Those are our tags, our markers.

DR. KALTON: May I just pursue this issue of consistency that you keep using as a term, as distinct from the net bias issue? If you think about, say, a random model of people answering this question, answering their race, there is a certain random component to it. When you ask it one way and you ask it a second time, you get a different answer. If you think of the P-sample and the E-sample in that mode, you would get inconsistency, but it would not necessarily matter, would it, to what you subsequently do?

DR. HOGAN: For example, let’s use the 1990 numbers. For all practical purposes, the undercount of African Americans and Hispanics was very similar, 5 percent and 4.5 percent. If there is a big movement between the two and it goes in both directions, it has very, very little effect. On the other hand, if there was a big misclassification between whites living on reservations and American Indians living on reservations—I do not think there was, but if there had been—then we are moving people from an undercount of 3 percent to 18 percent. So it depends on which groups you are talking about.

The other thing—and we have sort of lost track of this—is that each one of these things is, at most, one of several things we are looking at. We are looking at, say, moving between race groups, but we will also have post-stratified them by tenure and by size of place and by something else, perhaps. So it is one of several markers that have to work together. One of the markers perhaps could be somewhat off if the other markers work well. But in choosing a marker, it is something we need to think about. It is important, but it is not the whole story.
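Using Dr. Hogan's illustrative undercount rates (not actual estimates), the damage from misclassifying a person is roughly the difference in the synthetic coverage factors the two groups would receive:

# Illustrative undercount rates only, not estimates.
for u_correct, u_wrong in [(0.05, 0.045),   # two groups with similar undercounts
                           (0.03, 0.18)]:   # two groups with very different undercounts
    factor_correct = 1 / (1 - u_correct)    # synthetic factor the person should get
    factor_wrong = 1 / (1 - u_wrong)        # factor applied after misclassification
    print(round(factor_wrong - factor_correct, 4))
# first pair: about -0.0055 (negligible); second pair: about 0.1886 (large)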

Let’s move along here. The next part of this is to see how well some of these work as predictors. This is Q-6 [Haines, 1999a]. Again, the data we were just looking at were from the dress rehearsal. In terms of exploring the implications of some of these variables, returning to the 1990 data set, we took these variables and did a logistic regression, trying to predict the probability of inclusion in the census. That is, we are only looking at the P-sample side; we are only looking at their chances of being counted. How well these predict the chance of being counted is part of the story—an important part of the story, but only part of it.

If you turn to Table 1 [Haines, 1999a]—I am not going to go through every cell—I will show you the kind of research we have done. For non-Hispanic white, our sample size is quite large, 245,000. We did a logistic regression, putting in a number of things, some of which dropped out—age, sex, tenure, household composition, the urban size variable, no response. You can see which ones came out significant.

The Wald test statistic is there, together with degrees of freedom. Those who have an intuitive feel for that should speak now and explain it to the rest of us. Next to that we have the Wald p-values, many of which go out several decimal places in terms of range of probabilities. Because of degrees of freedom, you cannot just scan down and say which are the biggest. But this gives us some information on how well these might work together in terms of predicting the probabilities, which ones are important and which ones are not.

It can be a little misleading because, with the logistic regression, the number in some of these cells might be very small. I will point out one case where it has a very significant p-value, but I am not sure how to interpret it. That is my favorite cell here, which is the urban variable for American Indian reservations, which came out significant. We had a couple of clusters. We had a small town on a reservation, so a couple of clusters were classified as urban. So you have to use this with caution.

Let me flip to the next one. I think this is a little bit easier to interpret. These are the odds ratios for the various variables that went into the model. For the control variable, we had an odds ratio of 1. It is pretty obvious, in scanning the table, which one was the control variable. For age, the control variable was 50+ females. You can see the odds ratios for the other groups relative to that. Nonowner/owner, .66 versus 1. That has a certain amount of power. The household composition, just crudely looking at it, was a slightly better predictor than relation to household. Size did not fall out.

Interestingly enough, for urban size, the rural areas were the hardest to count. The odds ratios were higher for all the other categories.

Let’s turn to region, because that is one of the ones that has been controversial. What does it really mean to be in the Northeast or the West? What does that tell us in terms of chances of being counted in the census? After you have controlled for owning, size of place, race, age, all these other variables, what does being in the Northeast tell you? It tells you a little. (I am on Table 2, page 8 [Haines, 1999a].)

For non-Hispanic whites and others, the odds ratios were lower for the East, Northeast, and Midwest, and somewhat higher for the South. Just sort of intuitively looking at it, it might indicate that we do not really need four regions; we just need two.
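For readers who want to see the mechanics behind tables of this kind, here is a small self-contained sketch, on simulated data with hypothetical variable names, of fitting a logistic regression for the probability of being counted and reporting per-coefficient Wald statistics, p-values, and odds ratios (the exponentiated coefficients, each relative to its variable's reference level). It illustrates the table layout only, not the Bureau's model or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age_sex": rng.choice(["18-29 male", "30-49 male", "50+ female"], size=n),
    "tenure": rng.choice(["owner", "renter"], size=n, p=[0.65, 0.35]),
    "region": rng.choice(["Northeast", "Midwest", "South", "West"], size=n),
})
# Simulated truth: renters and younger males are somewhat harder to count.
true_logit = (2.5
              - 0.4 * (df["tenure"] == "renter")
              - 0.3 * (df["age_sex"] == "18-29 male"))
df["counted"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

fit = smf.logit("counted ~ C(age_sex) + C(tenure) + C(region)", data=df).fit(disp=False)
wald = (fit.params / fit.bse) ** 2          # per-coefficient Wald chi-square, 1 df
print(pd.DataFrame({
    "odds_ratio": np.exp(fit.params),       # relative to each variable's reference level
    "wald_chi2": wald,
    "p_value": chi2.sf(wald, df=1),
}).round(3))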

DR. ZASLAVSKY: Isn’t this a bind you have created for yourself by not modeling? The only way you can introduce region is by interacting it with every other variable that you use, if you do not model.

DR. HOGAN: Well, no, we could—as I said, we have what we did in 1990. We have region there, at least for non-Hispanic whites. So we do not have to have the other variables. It is really which ones have the most predictive power—but if you believe that everything we could dream up and throw in is important, then we will be leaving out dimensions.

DR. ZASLAVSKY: But you have shown that after you control for all these things, region is not a huge deal, but it is something. If you go out and tell people in the South that, conditioned on everything else, they have a bigger undercount, with an odds ratio of 1.25, relative to the West, and the Northeast has a lower undercount, with an odds ratio of .87, but we are going to ignore that because we had other variables that were better and because we have too many cells already to cross them with region—isn’t that a bit of a problem?

That is why we model. I just want to emphasize here what the cost is of not modeling. You have to throw away some variable that, when you look at it in a model like this, looks very important.

DR. HOGAN: That is a cost of non-modeling. The cost of modeling, besides the complexity, is the chance of throwing in spurious variables and finding spurious results there.

DR. ZASLAVSKY: That is a risk either way, right? If you make too many cells, you can also throw in spurious variables.

DR. HOGAN: With our current approach, we are very limited. We are going to choose only a handful of things that we do understand well. The approach I am hearing from you is, if you think it might be important, put it in—similar to what we did in 1990. Yes, you can get a lot of variables in that.

DR. ZASLAVSKY: No, but with a given number of parameters to be estimated in your model, you can use them in different ways. You can put in more main effects and fewer interactions or you can put in very few main effects and all possible interactions. If you do no modeling, if you just estimate every cell directly, then you are constrained to only put in main effects with all interactions, except to the extent you reduce that by ad hoc collapsing.

You just presented us with a model that has the opposite philosophy, and it shows us a lot of things that look interesting, even in the presence of all the others. Now you are telling us that you have to get down to about five of these in order to be able to include all these interactions you have not even told us about.

DR. HOGAN: Let me hear from Bruce.

DR. SPENCER: It is very related to what Alan is saying. If you took 357 dummy variables and put them into your logistic regression model, then you have the post-stratification that you used in 1990 reflected there. Then you could say, maybe we should collapse some of the highest-order interactions there. Or you could even take that and then say, what would happen if we added on another variable that looked interesting, at the cost of a few extra parameters? So you have more flexibility if you do a logistic regression model. It includes post-stratification as a special case.
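A tiny illustration of the point that post-stratification is a special case of the logistic model: a logit fit with one indicator per cell and nothing else reproduces the observed cell-level rates exactly. Simulated data and hypothetical cell labels.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rates = {"owner-NE": 0.97, "owner-S": 0.95, "renter-NE": 0.90, "renter-S": 0.88}
df = pd.DataFrame({"cell": rng.choice(list(rates), size=2000)})
df["counted"] = (rng.random(len(df)) < df["cell"].map(rates)).astype(int)

# Saturated dummy coding: one coefficient per cell, no intercept, no other structure.
fit = smf.logit("counted ~ C(cell) - 1", data=df).fit(disp=False)
check = pd.DataFrame({
    "fitted": pd.Series(fit.predict(df), index=df.index).groupby(df["cell"]).first(),
    "observed": df.groupby("cell")["counted"].mean(),
})
print(check)   # the two columns agree: the saturated logit is the post-stratified estimator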

I am wondering—aside from things like the fact that your computer programs are written—why not use logistic regression to develop the adjustment factors?

DR. HOGAN: This logistic regression is focused only on one aspect of the problem, which is how to predict the capture probability. You could have another logistic model, I guess, to predict the probability of being erroneously enumerated. But then you still have the remaining piece, which is the carrying-down. For whatever thing you come up with, you have to have a simple way to apply that.

DR. SPENCER: Even now, you do not have to use the same post-stratification for adjusting for erroneous enumerations and adjusting for undercoverage, because you could have an undercoverage set of post-strata and an erroneous enumeration set of post-strata, and any given individual will be classified in one undercoverage post-stratum and one erroneous enumeration post-stratum. They do not have to be the same.

DR. HOGAN: I do not understand how you would compute.

DR. SPENCER: A simple example: Suppose you use sex for post-stratification for undercoverage and you use age for post-stratification for erroneous enumeration. Then you take any person who has a certain sex and a certain age, and you know what undercoverage post-stratum he is in and what erroneous enumeration post-stratum he is in. You pull the adjustment factors from the respective post-strata.

DR. HOGAN: So, essentially, in terms of DSE cells, you would have the cross-classification of those two.

DR. SPENCER: They would just multiply.

DR. ZASLAVSKY: You would only have one plus six parameters estimated. You would not have 13 parameters.

DR. BROWN: You would rely on an independence model for those two? You just said you would multiply. That sounds like an independence model for the cross-classification.

DR. ZASLAVSKY: No. There are two different factors that describe two different things. I do not think Bruce is seriously suggesting that model. He is just trying to illustrate how you do the calculation.

DR. SPENCER: I have not thought through the independence aspect.
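In symbols, the construction Dr. Spencer sketches (purely illustrative, with sex s indexing the undercoverage post-strata and age a indexing the erroneous-enumeration post-strata) would give, for the cross-classified DSE cell,

\[
\widehat{N}_{s,a} \;=\; C_{s,a} \times \frac{\widehat{CE}_a}{\widehat{E}_a} \times \frac{\widehat{P}_s}{\widehat{M}_s},
\]

so the two adjustment factors come from different post-stratifications and simply multiply, which is exactly the implicit independence structure Dr. Brown is questioning.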

DR. NORWOOD: There is a big difference, I might say, in being in the Census Bureau and having a discussion.

DR. LITTLE: I would like to slightly elaborate on those comments and say that, instead of having region here, you could have state. I would like to have seen state carried along in these logistic regressions, because a key issue would be, after you control for these other variables, post-stratifying variables, the differences between states. Differences between states are a really key issue. If you did this with state there and you still had differences between the states, then I think it would be a serious problem to not include state in the model. And by doing cross-classification you are saying, “I cannot include state now because I have 51 variables there that I have to add on as another cross-classification variable.” It is a very constrained way of creating the post-strata.

Another way of creating the post-strata would be just to create post-strata directly on the predicted probability of a match. You could then create six post-strata that would include all the information.
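A sketch of that alternative, again on simulated data with hypothetical names: fit the logistic model (state can ride along as a predictor), then cut the predicted probabilities of being counted into a handful of groups and treat those groups as the estimation domains.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "tenure": rng.choice(["owner", "renter"], size=n),
    "age": rng.choice(["0-17", "18-29", "30-49", "50+"], size=n),
    "state": rng.choice(["NY", "TX", "CA", "OH"], size=n),
})
true_logit = 2.3 - 0.5 * (df["tenure"] == "renter") - 0.3 * (df["age"] == "18-29")
df["counted"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# State rides along as a predictor even where it would be too much as a cross-classifier.
fit = smf.logit("counted ~ C(tenure) + C(age) + C(state)", data=df).fit(disp=False)
df["p_hat"] = fit.predict(df)

# Six estimation domains cut directly on the predicted probability of being counted.
df["domain"] = pd.qcut(df["p_hat"], q=6, labels=False, duplicates="drop")
print(df.groupby("domain")["counted"].agg(["mean", "size"]))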

DR. HOGAN: Yes, there are a number of approaches that we could have chosen.

DR. LITTLE: But if you choose that approach, then you have the possibility of saying that you are accounting for differences between states in this model.

MR. THOMPSON: Let me jump in and defend Howard a little bit here. We sort of placed some constraints on Howard. What he has to do has to be, one, a very straightforward methodology, and two, something we understand from previous work, which rules out a lot of the modeling stuff. The reason for that is that we have to operationalize this process, we have to conduct it in a very short timeframe, and we have to validate that it is working. Given that, we have really placed Howard in a position where he has to carry out something in an incredibly short timeframe and look at the results and understand that they have been carried out correctly. So our goal is not to do the optimum modeling here; our goal is to do something that is fairly simple, fairly straightforward, will meet the goals of producing undercount factors, but also will allow us to verify it and implement it and tabulate it in a very short timeframe.

Given that, we put Howard in a position where he has to do something that basically is using some post-stratification and does not use a lot of modeling. After the census, we might be able to do some research and use some of these techniques to see if the stratification used was good and worked. But we really have to carry this out in a very short timeframe. We have to validate that our computers work. We have to understand the anomalies in the data. It has put Howard in a position where he has to do something fairly simple and straightforward.

DR. HOGAN: And also has to be sort of explainable. The cross-classification approach is certainly something that we can explain, validate, show people. There are a number of people in the Census Bureau who are studying the logistic regression approach, understanding it more and more every day, solving some technical problems that have never been solved—and then finding that you had already solved them, Bruce.

DR. SPENCER: Or got stonewalled by them.

DR. HOGAN: Or got stonewalled by them. But meanwhile, in terms of getting ready for census 2000, getting our programs written, we have adopted an approach based on essentially taking the 1990 approach and expanding it somewhat, not starting again from scratch.

DR. ZASLAVSKY: So where does the raking research fit into this? Essentially, the model we are talking about is like that.

DR. HOGAN: The raking research was very important when we had the state variables, the state estimates, and we really knew we could not possibly support it. We would like to develop post-strata that we will not have to rake. Raking, in our research—I hope I characterize this right—would be very good if there were only two dimensions that were sort of independent; you could really rake them. Once you get to multi-way interactions, the raking does not buy you much and becomes fairly complex. In the dress rehearsal, we just separated off tenure from everything else in a two-way raking. Even within the dress rehearsal, there were some quirks that that implied. If we pursued it seriously, especially with our larger sample, bringing in every interaction rather than just a handful (where it might still be arguable), it loses its efficiency and becomes more complex. Our basic approach is to define post-strata that are more likely to stand alone.
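For reference, the two-way raking being described is iterative proportional fitting of a table to its row and column margins; a bare-bones sketch with made-up numbers follows.

import numpy as np

def rake_2d(table, row_targets, col_targets, iters=50):
    """Two-way raking (iterative proportional fitting): alternately scale rows
    and columns until the table matches both sets of margins."""
    t = table.astype(float).copy()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]   # match row margins
        t *= (col_targets / t.sum(axis=0))[None, :]   # match column margins
    return t

# Made-up cell counts: tenure (rows) by race/ethnicity group (columns).
cells = np.array([[400.0, 300.0],
                  [150.0, 150.0]])
raked = rake_2d(cells,
                row_targets=np.array([720.0, 280.0]),
                col_targets=np.array([560.0, 440.0]))
print(raked.round(1))
print(raked.sum(axis=1), raked.sum(axis=0))   # margins now match the targets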

DR. LITTLE: May I just make two comments? One is, I think creating post-strata based on predictions from a logistic regression is not very difficult in terms of operational complexity. At least I would need to be persuaded that it is more complex. It may be a little bit, but it is a very minor increase in complexity. I think there is actually somewhat of a gain in simplicity because you have a smaller number of post-strata at the end than 357, or something. So I would still argue that that might be a better way of doing it.

I just want to make a comment about simplicity versus complexity. I do really understand the fact that the Bureau has to do stuff that is simple and reasonably transparent. On the other hand, I think from the point of view of assessing a technique, it is important to somehow show that the method based on a simple model is not doing a lot worse than maybe a slightly more complicated thing that is based on a much better model. If you can convince me that here is a very complicated way of doing something, here is a simple way of doing something, and the simple is close to the complicated, so it is not worth the additional effort, then I am quite happy to go with the simple. But I am less happy if the evidence shows that the simple model has real deficiencies, but I am only going to go with the simple model because I cannot deal with that more complicated version.

I think there is a real distinction there between the circumstances under which I would buy a simple model versus not.

DR. SEDRANSK: This follows very well on Rod’s comments. This is an exploratory analysis. It is actually identified very clearly in bold face as that. Yet I think what we are looking for are the marginal gains from adding a post-stratum factor, for example. I have seen very little of that in the corpus of it. There is a selection of variables and a set of things that look good because the p-values are small. Yet what you really want to know is what you are gaining by adding one or by deleting one or deleting two. I think that is really missing here. It would address Rod’s concern. If you really simplify it, show that you have not lost much by not putting these things in.

DR. HOGAN: Along those lines, what we are going to turn to next is—we have taken a subset of these variables and we have now come up with seven strata and we are comparing what we gain or lose by adding variables. So we have two base-line models, one of which is just stratified by race and tenure, and the other just stratified by the 357, and then some alternatives: What would happen if you added this, added this, added this? For each of those alternatives, we are computing variances and trying to get a scale of the synthetic bias. We are trying to answer the kinds of questions, at least within the context of the post-stratification approach we have adopted, that you are asking.

I think Rod raises, always, some interesting aspects. I will vehemently deny him the one point. He says this may not be a big change. When I look my programmers in the eye and say, “Can you change a semicolon to a colon,” it is a big change. We are programming it now. As I said at the beginning of the session, we are bringing it along. Here is where we are. We have not made final decisions in terms of post-stratification variables and how those are best defined, but we are fairly locked into this basic approach of using post-strata that cut across state lines. We have made that decision, we discussed it with the panel a while back, and we are not going back across that bridge.

DR. LITTLE: I have a minor point, in terms of the presentation, which follows on with what Joe is saying. Some measure of R2 would be useful in these regressions. There are analogues of R2 for logistic regressions. What percentage of the variability is being explained by these variables? That would be a useful thing. I would rather see that than p-values. That is probably what Joe was saying.

DR. SEDRANSK: I would go further.

DR. BROWN: So would I, but we have to bear in mind that all of this is really a preliminary analysis, and the more definitive answers come from a different sort of analysis than is being carried out now.

DR. SEDRANSK: I think it is worth adding, though, that I would go much further than that. I did not see any measures of quality of fit of the model, for this one or any other one, in all of this documentation. I may be an extremist on this, but I would go much past deviance measures to just convince the professional audience—not Congress, but others—that the models do fit. I agree very much with Larry that this is not the definitive analysis. Later on you are going to go on to check things. But I was really not confident, having seen nothing showing that these models really fit.

DR. HOGAN: We conceive of these models simply as a way of selecting variables for the next time.

Let’s proceed to what we are going to do next. The next step will be to select some post-stratification approaches and, for each one, simulate a set of estimates based on 1990 data, and then map those estimates back to the 51 post-strata groups used in 1990 and, in future research, back to cities, states, or other geography. For each alternative, we will calculate the coverage factors, try to predict the variance you can expect at those levels, based on 1990 data but corrected for the new sample design and sample size for 2000, and also try to get an idea of the synthetic bias.

That is very difficult. To get an idea of the synthetic bias, what we are going to be doing is coming up with a set of target values using the kind of full-blown, “throw in everything but the kitchen sink” logistic regressions we have been talking about, even throwing in demographic analysis and whatever else, to give us something that is outside the model we are testing, so that we can calibrate—“scale,” perhaps, is a better word—the amount of bias that we might get from this approach.

The reason we are doing this is, the more you pool data, obviously, the lower the variance you get on your factors. So if we only look at the variance side, then we are constantly fooling ourselves that fewer and fewer post-strata variables are better. We are trying to at least make some allowance for the fact that the more you pool your data into just a handful of post-strata, the more synthetic bias feeds back in—although calibrating that, obviously, is quite difficult.
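A schematic of this kind of evaluation loop, with entirely fabricated data and variable names, just to show the bookkeeping: carry each candidate scheme's post-stratum factors down to areas synthetically, compare them with richer “target” factors, and note that fewer strata mean more pooling (lower variance) but larger synthetic bias.

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Fabricated evaluation file: areas with "target" coverage factors from a richer
# (kitchen-sink) model, plus the variables a candidate scheme could use.
areas = pd.DataFrame({
    "tenure": rng.choice(["owner", "renter"], size=200),
    "region": rng.choice(["NE", "MW", "S", "W"], size=200),
    "pop": rng.integers(1_000, 50_000, size=200),
})
areas["target_factor"] = (1.01
                          + 0.04 * (areas["tenure"] == "renter")
                          + 0.02 * (areas["region"] == "S")
                          + rng.normal(0, 0.005, size=len(areas)))

def evaluate(scheme_vars):
    """Carry post-stratum factors down synthetically under one candidate scheme."""
    factors = (areas.groupby(scheme_vars)
                    .apply(lambda g: np.average(g["target_factor"], weights=g["pop"]))
                    .rename("stratum_factor")
                    .reset_index())
    merged = areas.merge(factors, on=scheme_vars)
    synthetic_bias = np.average(np.abs(merged["stratum_factor"] - merged["target_factor"]),
                                weights=merged["pop"])
    return len(factors), synthetic_bias

for scheme in [["tenure"], ["tenure", "region"]]:
    print(scheme, evaluate(scheme))
# The richer scheme tracks the targets more closely, at the price of spreading
# the sample over more strata.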

DR. SPENCER: When you say “adding more and more variables in the models,” are you talking about geographic variables or person-characteristic, non-geographic variables?

DR. HOGAN: I am talking about both, but not geographic at the level of the fine-detail block or local-area level. By geographic variable, we throw in region versus division versus census regional office. We do look at urban size, et cetera. We are not throwing in as a separate variable New York City.

DR. SPENCER: Maybe that would be a good thing to do. If you do not put in New York City, you are essentially assuming that your model, which has been fitted from maybe regional data, applies to New York City. The whole issue of synthetic bias is that what holds on the aggregate does not hold for an individual area. So maybe a way to get at it is to put in the particular geographic effects that you are interested in and tease out what happens, and allow for sampling variance of differences.

DR. HOGAN: That might be possible.

DR. ZASLAVSKY: I am not quite sure where you were going with this idea of creating target estimates that bring in a lot of other stuff. I am a little nervous about the idea that you might be selecting a way of doing a post-stratification that is meant to be a good summary of the information in the PES, on the basis of how well it produces something that is external to the PES. In other words, if you manage to get something out that is closer to something else—which is the PES adjusted for, let’s say, some demographic analysis controls, with a certain stratification of the PES itself—that actually is showing that it has bad goodness of fit for the PES, which just happens to match up with some other external source of data, which should have been brought in in your estimation procedure.

There is another approach. I cannot tell enough about what you are doing to say whether it is different or not. It is essentially what is in Eric Schindler’s paper, where you compare synthetic estimates or some model-based estimate to a direct estimate. Even if the direct estimates are noisy—you pick domains where the direct estimates may be noisy, but you can do the appropriate adding and subtracting of estimates of variance to get an estimate of mean squared error, and do that for a number of different types of domains, since this is a multi-objective problem.

I do not know if that sounds to you anything like what you are already planning to do or not.
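In outline, the “adding and subtracting” of variances is the usual device for estimating the mean squared error of a synthetic estimate from noisy but roughly unbiased direct estimates over a set of domains d = 1, ..., D; this is a generic formulation, not a claim about the specifics of the Schindler paper:

\[
\widehat{\mathrm{MSE}}\bigl(\hat{\theta}^{\mathrm{syn}}\bigr)
  \;\approx\; \frac{1}{D}\sum_{d=1}^{D}
  \Bigl[\bigl(\hat{\theta}^{\mathrm{syn}}_d - \hat{\theta}^{\mathrm{dir}}_d\bigr)^2
        \;-\; \widehat{\mathrm{Var}}\bigl(\hat{\theta}^{\mathrm{dir}}_d\bigr)\Bigr],
\]

since the squared difference overstates the synthetic error by roughly the sampling variance of the direct estimate.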

DR. HOGAN: I am not sure we are going to do that. That is an interesting idea.

DR. LITTLE: From my perspective, the variables that determine the sample design—that is, over which there are differential probabilities of selection—should have a higher priority in terms of being included in the logistic regression. From my perspective, those variables have a special status. I notice that you do not have block size in there, for example, which is a variable across which the selection probability differs. From my model-based perspective, the way to deal with variables where you have differential probabilities of selection is to include them as stratifiers and predictors in the regression. So I would like to see the design variables included in the logistic regression rather than taking care of the design effects by weighting.

DR. HOGAN: Let’s remember that the logistic regression here is not the focus of our work. Our goal here is not to come up with the best logistic regression we could; it is to come up with the best post-strata that we are going to use. The best logistic regression that fits 1990 will be quite interesting, but it does not solve our problem, which is the best set of post-strata for 2000.

DR. LITTLE: I guess the simple way of saying what I am saying is that the variables that are stratifying variables in the design should be included in the post-stratification as well.

DR. SPENCER: You could also test it by doing weighted and unweighted, right? Couldn’t you do weighted and unweighted post-stratified estimators? If they differ very much, then you fear that you have omitted one of the variables that is related to the selection probabilities.
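A minimal sketch of that diagnostic, with fabricated weights and a hypothetical outcome: compute the post-stratified match rate with and without the design weights and flag large differences.

import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 4000
df = pd.DataFrame({
    "post_stratum": rng.choice(["A", "B", "C"], size=n),
    "weight": rng.choice([1.0, 2.0, 5.0], size=n),   # fabricated design weights
    "matched": (rng.random(n) < 0.93).astype(int),
})

def rates(data, weight_col=None):
    grouped = data.groupby("post_stratum")
    if weight_col is None:
        return grouped["matched"].mean()
    return grouped.apply(lambda g: np.average(g["matched"], weights=g[weight_col]))

comparison = pd.DataFrame({"unweighted": rates(df), "weighted": rates(df, "weight")})
comparison["difference"] = comparison["weighted"] - comparison["unweighted"]
print(comparison.round(4))
# A sizable difference suggests a variable related to the selection probabilities was left out.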

DR. HOGAN: Are you talking about in terms of the logistic regression, or that when we come up with whatever set of post-strata we do, we will start with, say, the 357, and our next cut should be the size of a block? Am I misunderstanding? Is that what you are saying, Rod, or did I completely misunderstand you?

DR. LITTLE: Variables that are included in the stratification have a special status in the analysis.

DR. HOGAN: I guess it is special status, but I do not know that size of the block is a very good predictor of coverage probabilities, probabilities of being undercounted or overcounted. Are they, Bruce?

DR. SPENCER: They were in my model.

DR. ZASLAVSKY: I understand, in general principles, why Rod’s suggestion would make sense, but under the constraints we have I do not think I would introduce that if it meant giving up race or giving up region. I do not know that what you suggest actually could be implemented, given these constraints.

DR. LITTLE: That is probably right.

DR. SEDRANSK: Could you try other definitions of region besides region and census division? In work I have done, those are good, but they are not necessarily definitive. Have you tried other breakdowns?

DR. HOGAN: The only other one we tried was a regional census center, the 13 areas that managed the census in 1990, to which the local offices reported.

DR. SEDRANSK: Is it too late to try?

DR. HOGAN: We have already tried it.

DR. SEDRANSK: I mean try something else. I am not going to tell you what else it is. I think the groupings of census divisions have validity, but there are things—and I am a newcomer to this—in other variables, there are other groupings that might work.

DR. HOGAN: My experience with this regional variable is that we all sort of have preconceived ideas—you know, the Northeast is harder to count than Detroit—but after you control for renter, race, and size of city, then everything you thought you knew you no longer know. Given that you are dealing with a Hispanic renter, does it help to know it is in Boston or Indianapolis?

I think there is something going on there—and our logistic regression shows it—which may help you with housing style. You do not get brownstone walkups in Phoenix. So there may be something going on. But it is fairly attenuated once you control for everything that gives you your stereotype of why New York City is hard to count.

DR. SEDRANSK: So you look at residuals.

DR. HOGAN: Yes, yes. If you have another cut, we are certainly willing to listen to it. I spent weeks just trying to come up with wonderful new regional patterns, and we decided to stick with the old ones because there is nothing obvious out there.

DR. NORWOOD: I think this might be a good place to stop the discussion for lunch. Let me just say that I think we have covered a lot of ground. Having read all the papers and listened today, I am not surprised that there are all kinds of suggestions thrown out. I think that is what usually happens at meetings like this. I know that many of them you have thought about, and I am sure there is food for thought on some others. The reason we had this workshop now rather than much later is so that you could take advantage of the suggestions and views of other people.

Based on my own experience with advisory groups, I might say that I do not expect you to take advantage of every single suggestion, because then we would be in 2010 by the time you got through. But I think there were a number of things that I am sure you want to think about, and there will be more this afternoon.

CONTINUED DISCUSSION OF POST-STRATIFICATION

DR. NORWOOD: I would like to begin the afternoon session. I want to tell you that I have asked Howard to start out by telling us where they are, because I know that they have two little operations to carry out between now and the end of the census. I think it is important for us to know what is open for change—and there are many things, I know, that are—on which they would like advice, and which things are pretty much settled for now and which will be certainly under great consideration for 2010.

My experience with the census—and I have been through a number of them— is that there is always a future census, and it can always be better because we are always going to solve the problems of the past census. I think it would be useful for us to try to be as practical as we can. I would like to have this workshop produce useful things. I think we got a lot this morning. I think there is more that we can get this afternoon.

Before I turn the floor over to Howard, however, I have been talking to my good friend Joe Waksberg, who always has words of wisdom. I asked him if he might start us off this afternoon with a few general comments about surveys and survey design. You all know that he has had a lot of experience, not just at the Census Bureau, but at a lot of other places. I guess, in a way, he is probably, in good measure, responsible for the political fracas that has developed over the census. My recollection is that it was Joe Waksberg who did some of the original work in trying to measure the undercount, many years ago.

MR. WAKSBERG: I had some general questions that bothered me, which kept me from really drawing any firm conclusions on what to recommend on some things. Let me mention them. I will not go into detailed comments on either post-stratification or sample design now, although I do have some comments. I think it will be more appropriate later. But I thought I would raise a couple of general questions.

The first one relates to—I did not get any clear view of the priorities for the kinds of population undercoverage estimates you are going to have. Let me give you an example. Originally, I assume, the highest priority was to produce state data, because they were going to be used for congressional apportionment and so on. Now that that is no longer a major issue, what are the priorities?

Let me give you my own bias on this. You can talk in terms of geographic ones. Rod mentioned, for example, the importance of getting data for New York City, for the other big cities, although, obviously, there is a limit to how far you can focus your attention on that. Or you can think in terms of the demographic subgroups, trying to improve estimates for black teenagers, Hispanic women, Asian or Pacific Islanders. I guess my own bias is to put a fair amount of stress on the latter in terms of the more general uses that would be made of the data, not only in the census, but for such things as life tables and other things you can think of.

But I did not get a clear picture of that. As a result, issues that were raised in the memos—for example, should you be able to sample minorities? I do not know how to answer it, without knowing more fully what the priorities are. That is issue number one.

Issue number two—I will just pick up something from what Janet said—is the issue of the time schedule. How much time actually is there for further research? When do you have to pin final things down and say this is the end? This is particularly important in the issue of sample design, but also the software for production. You have mentioned difficulties in some of the complex software for using regression methods. The question is, what is your time schedule? When do you have to pin things down?

A third issue: I assume that the basic nature of the sample design you can no longer affect. That is, you can talk about the reduction of the sample, but you cannot talk about the initial sample. It is too late to do that. Are there any other restrictions on what can be done? Those are sort of the general ones. I want to keep the specific issues for the later discussion.

DR. NORWOOD: Mike Cohen has something to say about his view of domains.

DR. COHEN: This is probably something that almost everybody in the audience already realizes, but for those who may be a little bit confused, the term being used is “post-stratification” or “post-strata.” There is a common use of the term in statistics, where you are using population information, independent information, that has been collected to reduce variances through use of weighting to match these independent estimates from a population. That is not how the term is being used for this purpose. For this purpose, instead of using the term “post-strata,” you should think of the term “estimation domains.” These are just domains in which estimates are produced. There is no use of any population totals. I just wanted to reduce any confusion people might have.

DR. NORWOOD: Good. Howard?

DR. HOGAN: As always, Joe raises good questions that require more thought. But priorities, I think, is a good way to start thinking about it.

One thing that is clear is that the reason for the PES, dual-systems, whatever, was to attack the differential undercount. We have other strategies—we are using paid advertising, “Be Counted”, update of local information—to attack all the other coverage errors that we know how to attack. In my mind, the point of the A.C.E. is to attack those kinds of systematic biases that we have never been able to address, regardless of what we have tried, specifically the differential undercount. That is very high on our priority list.

Obviously, when we attack that, we are going to have to carry it down to the geographic areas where the data are actually used. Most uses of the data [raise the issue of] how that differential undercount bias translates into a local area. We have to pay attention, then, to the variance and everything else that we can create in that carrying-down, so that we do not swamp our ability to address the differential undercount. At least in my mind, that is a very high priority: the differential with respect to ethnic groups. But, of course, that translates into, broadly speaking, disadvantaged groups whom, census after census, we have systematically failed to include using our traditional techniques.

John and Ken are here; they might want to correct my view. But in my thought processes, that is very high and what I see as our role.

DR. PREWITT: As I understand it, we are kind of restricted to the short-form data. That is what we know about. There may be other variables out there that drive the differential undercount that we simply do not have any data on. You may have attitudinal things—civic responsibility, anything you want. So we are in a funny bind where it is the short-form questions against which we have to map the differential undercount.

I would like to come back to Joe for just a second. When you say that your own preference is to downplay region and put more focus on demographic groups, because there are some other criteria that we ought to have in mind—that is, the long-term uses of the data set, not just the differential undercount—it would be useful for me to hear you spell out these other criteria, how importantly we ought to be weighting these other criteria, against the problem of measuring as best we can with the short-form data. What are the criteria that drive you to say that we ought to be focusing on those variables anyway?

MR. WAKSBERG: I can give you a number of them, but mostly because, over the full decade, adjusted census data, used to make good population estimates, are used over and over again, nationwide—as the controls for the Current Population Survey and for almost all sample surveys. They are used as the denominators for analysis of different kinds of rates—crime victimization rates, for example. Typically, those are adjusted census figures, because they make a lot more sense than the unadjusted ones.

The same thing is true for life expectancy. It makes much more sense, for causes of death and things of that sort, to use adjusted figures for the denominators.

This does not mean that you do not produce adjusted figures for geography as well. But it is essentially an issue of priorities. Where do you want to put the emphasis? It probably more strongly affects the nature of the sample design than it does the estimation method. If you want to get decent figures on Asian and Pacific Islanders, you have to increase the sampling rate. If that is less of a priority issue, then you would draw that more in terms of geographic distribution. If you are concerned about the geography and you want to put a minimum number of cases by state or big cities—New York, St. Louis, Chicago, places of that sort—it is going to increase the variance on your estimate of the national undercount very seriously. You should not do it without really thinking that this is what you want to do.
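
[A standard way to quantify Mr. Waksberg's warning, offered here only as a rough illustration and not something from the memos, is Kish's approximation for the variance inflation from unequal weights: $\mathrm{deff}_w \approx 1 + \mathrm{cv}^2(w) = n \sum_k w_k^2 / (\sum_k w_k)^2$. Oversampling small groups, or imposing state minimums, spreads the weights out, and a national estimate pays roughly this factor in variance.]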

DR. NORWOOD: That rather bears out what I said at the beginning, before you came in. Because of the focus that has been building up for some years, but in particular with this census, on the political uses of the census, there is precious little discussion about the uses of the census at higher levels of aggregation. When I was at the Department of Labor, we used to use the census all the time for a whole lot of analytical purposes. To adjust the counts of the Current Population Survey, you certainly do not need the same level of disaggregation that you need if you are going to do something for some of the other, more political uses.

So I think one needs to keep this in balance somehow. It worries me that often, because of the political rhetoric, we do not pay sufficient attention to all of these other very important uses, which may be more long-lasting than the political repercussions.

MR. WAKSBERG: Maybe I am colored by the fact that almost all the surveys that Westat now does for the government require special emphasis on at least the major minorities, both the sampling of blacks and Hispanics. It is obvious that the analysis now really requires looking at these as subgroups. I think that should be taken into account here.

DR. NORWOOD: Howard, I will turn it over to you.

DR. HOGAN: Let me put the two things together, time for further research and what is still open for grabs.

Certainly, just about anything on the ground and operational is pretty close to being locked in, because we have to integrate with the rest of the census—their processing schedule, our processing schedule, interviewers—in terms of our basic operations, listing, housing-unit matches, et cetera.

In terms of this morning’s topic, post-stratification, we are pretty well set in terms of using post-strata as opposed to a regression approach, probably something between 357 and 700, somewhere in that range—maybe a little bit more, but around that range. Probably we will be using race, age, and sex. We can talk about tenure. There would have to be some fairly exciting stuff coming out before we would drop tenure.

Joe knows probably as well as anybody that with census taking, it gets late early. We have already drawn our listing sample. We did that last summer. We found out really early that the field folks had to know where our sample was going to be so they could rent the local space in time to install the phone lines, in time to hire the interviewers. Really early, they said, “Hey, we have to know right now because we have a lot of contracts.” So we are right now in the process of writing draft specifications to our programmers on post-stratification. It is easy to include more variables than you will finally use. You say, “I might use that variable. I had better make sure that it is on the file that I want to use.” The other way does not work at all. It is easy to drop variables; it is nearly impossible to add them.

The geography variable, obviously, we are going to be using. We need to have discussions with the people who code that to make sure they do it in time, to make sure they know they have to do it in time.

So all this happens very early.

But in terms of post-stratification, really, what we need from this group is—here are the variables we are looking at. They seem sensible to us. What are your comments? Here is the process we are going through to understand them. Is that a sensible process? Here is how we go about choosing. If you have any better ideas, we would really like your thoughts on that. Then, do you have any favorites? But it is within the context of the design we have.

On the sampling, right now what we are listing is the universe. Working with our field division, it is clear that allocation to the states—we could go back and fudge it a little bit, but it is pretty well fixed. How within the state we allocate is pretty much open. We have not committed ourselves one way or the other. There we really would like advice and information. But, of course, that sampling has to be done in time to mount the interviews in summer 2000.

On something like the missing data, where we are not going to run it until early 2001 and we are very early in our research, we would very much like to have advice and thoughts. We are not locked in.

So on some of these estimation issues, some missing data are fairly open, some sampling—Donna, what is the date we have to have locked in our sampling?

MS. KOSTANICH: We are actually going to be in production in December 2000.

DR. HOGAN: Fairly soon, because we have to lock in that sample reduction before we go to preliminary housing-unit matching. That takes place in January 2001.

I am sure there are lots of restrictions. As we get into our discussion, more and more will occur to me. But that is the only one that is sort of glaringly obvious, that we are locked into the sample listing and we are locked into this basic design of the estimation system and, of course, the variance estimation system.

What we are not locked into is how we go about defining our estimation domains, the process and what they finally look like. That is what I would like to get out of today from this group.

Let me talk about how we are going about some of this. The stuff we talked about earlier, especially the logistic regression, was just to kind of understand the variables and select a set to work with. We are going to plug them into a set of post-stratification approaches and see how they would have worked on the 1990 data. The steps of this are laid out in Q-5 [Griffin, 1999].

We have come up with a set of seven designs. For every post-stratification design, we have the results from 1990, and we can compute dual-systems estimates or whatever. We have the census results for that post-stratification design, and so we can come up with a set of coverage factors that we can use for carrying-down. Coverage factor i-A is the factor for estimation domain (post-stratum) i under approach A. We can then apply that and map it back to the 1990 post-strata groups. The 51 groups from 1990 give us at least one framework that we are going to look at. As I said earlier, we are going to look at other frameworks as well, such as geography.
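
[In notation not used in the transcript, and with the caveat that the exact definitions are in Q-5 [Griffin, 1999], the process just described can be sketched as: for each approach A and estimation domain i, the coverage factor is $f_{i,A} = \widehat{DSE}_{i,A} / C_{i,A}$, the dual-systems estimate over the census count; the implied estimate for one of the 51 1990 evaluation groups g is then the synthetic carry-down $\hat{N}_{g,A} = \sum_i f_{i,A}\, C_{g \cap i}$, where $C_{g \cap i}$ is the census count of people in both group g and domain i.]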

So we have now what this post-strata approach would predict in terms of the 51 groups. Then the question is, how good is that prediction? That has two parts, one of which is, what is the variance of that prediction? For that, we need to essentially do two things. We need to go back to 1990 and compute the 1990 variance on that approach. Having applied that approach to the 1990 PES, what would the variance have been? But the 2000 sample design is quite different from the 1990 sample design. It is larger, but also we are working very hard to avoid some of the highly differential weights we had in 1990. So we need to translate that variance into what we might expect in 2000.

I am not going to walk through all the formulas here. Essentially, what we do is take the 1990 weights, the 1990 variances, and get back to a unit variance. This is at the bottom of page 3 [Griffin, 1999]. Let me point out one little thing. If you have your reading glasses on, you can see that that is not 1.56 squared; it is 1.56 sigma squared.

The 1.56 is worth mentioning here. The sigma is the unit variance. We have our weights. We know 2000 is going to be different from 1990. Not just the A.C.E. design, but the census is going to be different. So we have done some plots and simulations and good guesswork in saying our variances may well indeed be somewhat higher than they were in 1990, because of some of the changes in the A.C.E. design. We are putting in 1.56 as an inflation factor, sort of protection if our unit variances should go up.
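
[The structure of that translation, sketched loosely here because the exact formula is at the bottom of page 3 of Q-5 [Griffin, 1999]: back out a unit variance from the 1990 results, $\hat\sigma^2 = V_{1990} / g(w^{1990}, n^{1990})$, where $g(\cdot)$ summarizes the 1990 weights and sample sizes; then rescale to the 2000 design and apply the protection factor, $\hat{V}_{2000} \approx 1.56\,\hat\sigma^{2}\, g(w^{2000}, n^{2000})$. The particular form of $g$ is the memo's; only the 1.56 inflation and the unit-variance idea are taken from the discussion above.]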

Working back slowly, we can come back to the types of variances you would expect for 2000 for a post-stratification alternative. Every post-stratification alternative has its own dimensions. We have to map them back into a common alternative. One common alternative is the 51 post-strata groups. Some of the results I will show you are those. Other alternatives we need to work out are state, city, or something else.

But in terms of coming up with the post-stratum factor, it is not exactly simple, but pretty straightforward statistical kinds of arguments. Then the tricky part is how to make some allowance for the synthetic bias in our thought process. Obviously, if you ignored the synthetic bias, then your thoughts would always run to very, very few estimation domains; they would have very low variance, and they would always look the best. In thinking through this question, how do you balance that against the synthetic bias?

To make that allowance, we are building some target populations. Now we go back to some of the logistic regressions we were talking about earlier. But the purpose here is to come up with a target population to help us calibrate the probable scale of the synthetic bias. (This is in the same Q-5 memo, pages 5 and 6 [Griffin, 1999].) Basically, we will use logistic regression to predict coverage probabilities. Then we will also model the probability of correct enumeration, combine those into micro cells, and come up with the targeted population.
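
[A minimal sketch of that target-building logic, in Python with simulated data and hypothetical variable names; the exact covariates and combination rule are in Q-5 [Griffin, 1999], so this only illustrates the shape of the computation.]

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
persons = pd.DataFrame({
    "renter": rng.integers(0, 2, n),
    "minority": rng.integers(0, 2, n),
    "mail_return": rng.uniform(0.4, 0.95, n),
})
# Simulated stand-ins for the 1990 P-sample match indicator and the
# E-sample correct-enumeration indicator.
persons["matched"] = rng.binomial(1, 0.92 - 0.06 * persons["renter"])
persons["correct_enum"] = rng.binomial(1, 0.95, n)

X = persons[["renter", "minority", "mail_return"]]
coverage_model = LogisticRegression().fit(X, persons["matched"])
ce_model = LogisticRegression().fit(X, persons["correct_enum"])

# Predicted coverage and correct-enumeration probabilities, combined
# within micro cells (here just renter x minority) into a DSE-like target.
persons["p_cover"] = coverage_model.predict_proba(X)[:, 1]
persons["p_ce"] = ce_model.predict_proba(X)[:, 1]
cells = persons.groupby(["renter", "minority"])
target = cells["p_ce"].sum() / cells["p_cover"].mean()
print(target)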

If you are interested in how we were doing some of this, there are a couple of very good papers by Bill Bell.

We are also bringing in at this point the results of some of the demographic analysis. I guess it was Alan Zaslavsky who suggested that that may not be a good thing to do. I think that was a good point that we will have to think further on. But at least in the research that we are now doing, we are bringing in the demographic analysis results. This makes our target, in a sense, fairly different from the estimates we are trying to measure it against. The more things you can throw into the target, the better—you may not have truth here, but at least you will not be completely circular. That was our thought process.

So we will have the target, with its warts and flaws, and we have what any alternative suggests. Then we can come up with a measure of mean squared error. We have the variance. We can then, by residual method, with all the weaknesses that entails, come up with a bias.
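
[One common way to write the residual step, sketched here as an illustration rather than as the memo's exact estimator: for evaluation group g under alternative A, with synthetic estimate $\hat{N}^{syn}_{g,A}$, target $\hat{N}^{tgt}_g$, and estimated variance $\hat{V}_{g,A}$, take $\widehat{\mathrm{MSE}}_{g,A} = (\hat{N}^{syn}_{g,A} - \hat{N}^{tgt}_g)^2$ and $\widehat{\mathrm{bias}^2_{g,A}} = \max\{0,\ \widehat{\mathrm{MSE}}_{g,A} - \hat{V}_{g,A}\}$; the truncation at zero is one convention, and it carries the weaknesses Dr. Hogan mentions.]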

Those are the pieces of information that we can use in thinking through this process. We are certainly open to other suggestions about better ways of thinking through the process.

Before I go on to show you some of the results of this, let me again remind you that all of this is predicated on 1990 data. At some point, we are going to have to make the leap of faith—or what I prefer to call the leap of professional judgment—that says, “I know how census 2000 has changed. I know that this variable means something different than it used to mean. I know that this particular activity was changed to correct this kind of problem we had in 1990.” Therefore, bringing in the information we get from this systematic stuff, we still have to say, “What about census 2000? How is it different? How is that likely to influence our choice of post-strata?”

Beyond professional judgment, I do not have any good way of doing that. A key question there will be the multiple-race responses. You cannot get anything out of that from 1990. You can study it all you want. It is not going to help you. At some point you are going to have to say, “What do I know, perhaps from dress rehearsal, perhaps from other studies, perhaps from sociology, about the proper treatment of this?”

Other examples are types of enumeration areas, which I mentioned earlier. The words may be the same between 1990 and 2000, but the areas covered are much larger.

Before I go into some of the results, I know some of you—maybe all of you— have worked through this basic approach. I would like to get your comments on that. Is this a good approach? Is there a better approach?

DR. SPENCER: The evaluation that you are doing is predicated on what you call the target being an unbiased estimator. It is a logistic regression estimate.

DR. HOGAN: A not-too-biased estimate of the scale of the bias term. I do not think we are too concerned in this, as we are trying to estimate mean squared error, that we have misestimated the direction. If we have estimated on a completely wrong dimension, then it would mislead us.

DR. SPENCER: You are using it to estimate the squared bias.

DR. HOGAN: Yes.

DR. SPENCER: And you want to do post-stratification, both because, if you could get luckier than anybody could get, you would get rid of synthetic estimation bias and you would get rid of correlation bias. Your sex ratios would match demographic analysis and so forth. That is not reflected in this evaluation, the correlation bias aspect. Even under the Alho-type model [that developed by J.M. Alho], you still do not match the sex ratios from demographic analysis.

DR. HOGAN: That is correct.

DR. SPENCER: So you are not picking up that component of bias in the analysis.

DR. HOGAN: Except for the step of using the sex ratios to adjust the target.

DR. SPENCER: I missed that. Thank you.

DR. HOGAN: We are throwing that in. For the purposes of kind of playing this game with the targets, we are bringing that in.

DR. SPENCER: Then I think that is excellent. That is my comment.

DR. ZASLAVSKY: I stated my position before, but I never object to saying it again. What you are going to do in the real world is, you are going to go through the synthetic estimation exercise and then, if you want to try to match demographic estimates, you will do another thing to match demographic estimates. What I would want to see from this part of the evaluation is how well your post-stratification does what it is mostly supposed to do, which is to get together things which are similar and separate things which are different, and do that along lines that coincide with some of this long list of possible kinds of domains you would want to estimate for—basically, the criteria that Joe was talking about—so that you can, at the end, do synthetic estimation and think that you caught the main things.

To try to also be going after several other things that may go in the same directions or not, it seems to me, confuses the issue. I would rather see you do a comparison—use the same formula as on page 7 of that memo [Griffin, 1999]—for direct estimates. Although direct estimates for small domains are very inaccurate, when you aggregate over many of them to get an overall mean squared error estimate, it is something that you can use for evaluation. You never know which particular domains are well estimated, but you do have some overall estimates.

DR. SPENCER: How do you do the direct estimate? Do you post-stratify?

DR. ZASLAVSKY: There are two issues. I understand what you are asking. Your question really gets to the fact that we are using the post-strata for two reasons. One of them is to get more homogeneity, to get rid of correlation bias. The other is to have a unit we can use to carry things down. The units used for those two things do not have to be the same. If we are doing a direct estimate for something that is big enough—for example, if we are doing a direct estimate for New York—you probably do want to do some post-stratification within, because you think there might be a substantial amount of correlation bias in a place that has a mixture of very high and low undercounts, like New York. If you are doing it for something that is more homogeneous, you do not need to.

DR. SEDRANSK: Let me say something modestly favorable about that target estimate. This was a conversation with Bill Bell. When I was reading over one of these memos last night, I thought it was interesting, but, of course, full of assumptions. I think there would be some purpose, in at least checking sensitivity, to do this exercise trying to see how sensitive it is to changes in the assumptions. What I am saying is, in addition, you need a full panoply of things to try out. If you are going to do that, just do not do it without looking at some changes in the assumptions you made. There are a bunch of things that are not testable. You can try to put in some other assumptions.

DR. HOGAN: I think we can do that to a limited extent. Answering two questions at once—Joe Sedransk asked what else is off the table. One thing that is clearly off the table is using demographic analysis sex ratios in the production estimates, for reasons, as you state, that we have a lot of stuff that is fairly newly specified and is untestable. But in terms of coming up with alternative targets to see if our decision is unduly driven by either including a post-stratum or not including it, I think that should be doable, especially if we only use it for the last few choices or to validate our final model.

DR. SEDRANSK: That is what I am suggesting.

DR. HOGAN: Any other issues on the basic approach? If not, we can get to some of the results. That is Q-7 [Schindler, 1999].

This is our first pass-through in terms of getting beyond the logistic regression and trying out actual post-stratification alternatives. We are focusing on post-strata defined by age/sex, tenure, the urban size variable, region or regional census center, percent minority, mail return. These are variables that the earlier logistic regression suggested would be worth looking at.

We define several models. The first model is sort of a minimum model. It is the one that is going to give the lowest sampling variance. That is just race and ethnic origin, by age, sex, and tenure. It is a simple model.

The next model is basically what we did in 1990, the 357 post-strata model, which brings in race, sex, ethnic origin, tenure, region, and urban, but combines cells for various minority groups.

Then there are some other ones. I am not going to spend a lot of time walking through each one. For example, in model three we bring in mail-return rate for some of the variables. For four, we bring in regional census center; then five, region and percent minority; six, regional census center, mail return. So there are various combinations of the variables I talked about—region or regional census center, mail response rate, urbanicity, percent minority, together with tenure, age, race, sex.

For each of those, we go through a process. If you turn to page 11 [Schindler, 1999], I need to point out something fairly important. Page 11 gives you the Model 1 results implied from 1990. They will look different from what you are used to seeing, because we really have not brought in one of the dimensions of the net undercount, which is the census whole-person imputations. We are just dealing with an internal file. Including census whole-person imputations tends to lower the overall undercount, so these numbers are a little bit higher than if we had brought in the full census count.

So if some of these numbers look different, they are different. We will at some point try to bring in the full census count. We just have not done that yet—including the whole-person imputations. So if you look at Table 2b on page 11 [Schindler, 1999] and compare it to what we came out with in 1990, it will look different. You will not be able to map it back.
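
[For the direction of the effect, stated only as a restatement of the point above and not as a formula from the memo: with the coverage factor written as $\widehat{DSE}/C$, the net undercount rate is commonly expressed as $U = (\widehat{DSE} - C)/\widehat{DSE}$. Leaving whole-person imputations out of the census count $C$, as these internal-file tables do, makes $C$ smaller and therefore makes $U$ look larger than the published 1990 figures.]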

PARTICIPANT: Could you clarify the terms “higher” and “lower”? When you say “higher,” do you mean a bigger number here or actually a smaller number? I just want to make sure I get my direction right.

DR. HOGAN: Let me see exactly how this works. Including whole-person imputations—it is going to be a lower undercount.

PARTICIPANT: Absolute value will be a smaller number.

DR. EDDY: Do you mean absolute value or do you mean an absolute shift?

DR. HOGAN: Shift.

DR. EDDY: It is adding a negative amount.

DR. HOGAN: Bob Bell and Bill Eddy are saying no.

PARTICIPANT: If you were to compare [Table] 2b [Schindler, 1999] to the actual results with whole-person imputations, would these numbers be on the high side or the low side?

DR. HOGAN: High. The undercounts here are bigger than the undercounts in 1990. I will work out the equations for you.

DR. EDDY: Just give us a one-sentence summary of all of that.

DR. SPENCER: It does not matter. Do not compare these to the numbers that you are familiar with from 1990.

DR. HOGAN: That is a one-sentence summary, a very good one.

DR. EDDY: I have a follow-up question: What should I compare these numbers to?

DR. SPENCER: Each other, model one to model two.

DR. HOGAN: If you walk through here, you can see the results of all the models, but you cannot compare them because each model has its own estimation domains.

DR. EDDY: I am sorry, that contradicts what you just told me.

DR. HOGAN: I should have said that it is very difficult to compare page to page. It is very difficult to compare, say, Table 2a to 2b [Schindler, 1999], because they slice the population differently. So we would like to map them back onto a common denominator for comparison. That is what goes into this, mapping all these models back into a common dimension, that common dimension being the 51 post-strata groups used in 1990.

Actually, it is easier to start with Table 3 [Schindler, 1999] rather than the chart. The first row here—non-Hispanic white and other races/owners/Northeast/large urbanized areas—has the coverage factors for each of the seven models. If you go back to the chart, you will see that plotted. This is our best way so far of displaying the results, improved somewhat based on some suggestions we received earlier. Each clump of seven shows the results from the seven different models for the same 1990 demographic group. You can see from this, first, how much the seven models give you different variances and different factors compared to each other for the same group, and you can compare that variation to the differences between group clusters.

For example, the minimum model, the one that does not bring in very much besides age, sex, and tenure, is pretty close to one. That is true right across every clump. But within the clumps, there are some variations. You can compare the size of those variations to how different all the white owners are from all the white renters. Are the differences between models important relative to the differences between groups as measured, regardless of model?

That is what we are really trying to display in this table: How important are the choices relative to each other, but also relative to the real differences? That is the point here.

DR. NORWOOD: Are you surprised that there are not more below the line?

DR. HOGAN: No. In 1990, there were a few below the line, but there were only a handful of post-strata that were below the line.

DR. BROWN: I have lost track of what we are supposed to be thinking of in this table as being desirable. If I look down the column for model five, would I like to see very variable coverage factors or very similar ones?

DR. HOGAN: If you are comparing within one model, what you would like, of course, is for the different ones to be quite different from each other. You want it to be picking up real differences in the population.

DR. ZASLAVSKY: But I think the answer to Larry’s question is that, as you go from very little post-stratification to very much, you are going to go from very little variation down the column to a lot of variation down the column, and either extreme is wrong. You cannot tell by just looking along that dimension whether there is somewhere in the middle where you have brought down bias and have not driven up variance. You cannot tell from that.

DR. BROWN: Right, so what I really need is the results of the preceding analysis that gives me an estimate of mean squared error.

DR. ZASLAVSKY: Exactly.

DR. HOGAN: And we do not have that for you.

What you could also infer from this—I will let you make your own inferences—is whether or not you believe that the different models are all that different, and whether bringing in some of these things is important or not important, because we are trying to display both the level and some allowance for the variance. If there is any model that gives you differences that you thought were very important, why is it giving you that difference? Is it a quirk in the variance or is it picking up a dimension of the undercount that the other models were missing?

This is one stage of the analysis. Because this maps the alternatives back to the 51 groups, we also have to map them back to states or cities or other political geography.

DR. EDDY: Larry just asked what he was supposed to be thinking. I really did not quite understand what you told him. But I do not care what you were supposed to be thinking. I would be much more interested in knowing what you think when you look at this.

DR. HOGAN: I have not studied all of the details, but we are only swapping one or two variables. One would not expect wildly different things. But they are fairly similar. The choices of the variables that are available to us are not implying huge differences in the output. That may change when we move from the 51 groups to the states, because, of course, we are mapping back to the 51 groups, which carry a lot of the same dimensions as our alternatives.

DR. EDDY: Right. But when you say that, are you essentially looking across a row and saying, “Well, it doesn’t change too much as I change from one model to the next,” or are you looking at a group of rows and saying, “I’ll look at this block of factors as I change from model to model”? When I do that, I think I see changes. That is, rather than looking at some individual coefficient and asking how it is varying, look at a bunch of coefficients and ask how they are changing relative to each other.

DR. HOGAN: The kind of stuff that I do would be, for example, looking at this, where, for every little clump, I can see the variance between models in the clump and say, “Well, that is scaled about so big,” and then the differences between these clumps and these are much bigger, indicating, at least initially, that renter still is a very powerful variable that is encapsulated in all of this, but some of the other variations are not giving us a whole new dimension. So that is sort of how I do it initially—but there are, of course, other approaches—sort of within and between analysis. If the differences within the models were very large compared to what we did in 1990, which was tenure and everything else, that would say we are really bringing in something that is tremendously either exciting or worrisome. This is saying we are bringing in some interesting stuff, but not enough to say that the whole basic 357 foundation we are building on needs to be rethought.
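
[A minimal sketch of that within-and-between comparison, in Python on a hypothetical table of coverage factors; the rows, columns, and numbers are invented, not the Q-7 results [Schindler, 1999].]

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
groups = [f"group_{g}" for g in range(51)]
models = [f"model_{m}" for m in range(1, 8)]
cf = pd.DataFrame(1.0 + rng.normal(0.02, 0.01, size=(51, 7)),
                  index=groups, columns=models)

within = cf.std(axis=1).mean()    # spread across models for the same group
between = cf.mean(axis=1).std()   # spread across groups, averaging over models
print(f"within-group spread (across models):  {within:.4f}")
print(f"between-group spread (across groups): {between:.4f}")
# If the between-group spread dwarfs the within-group spread, the choice
# among these models matters less than the real differences the groups pick up.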

But I want to hear your thoughts, too. If it was absolutely crystal-clear, I would have come in and said that this is one other thing that is locked down.

DR. ZASLAVSKY: Your domains that you are looking at for evaluation are only the ones that come out of the 357 stratification.

DR. HOGAN: That is right.

DR. ZASLAVSKY: So how could you ever find out whether one of these other models is bringing in something that is missed by the 357 stratification? Maybe there is an immense difference between the towns whose names begin with A through F compared to towns whose names begin with G through Z. You never see that here.

DR. HOGAN: No, you are not going to see that here.

DR. ZASLAVSKY: Even if you fit the model and saw that there was a huge effect.

DR. HOGAN: You are absolutely right. This is step one. Steps two and three we have already laid out, which are to map this back to, say, states or cities, which are not intimately tied with our own creations. That would be tremendously important. There may be other dimensions—alphabetical, if you like—or other ways we can cut it. Once we have created the synthetic universe, we can then use whichever way is most useful.

DR. ZASLAVSKY: What this would do is tell you if there was a model that was missing something that is in the 357 stratification. If it is missing something, then it is going to be like taking two post-strata that look different and collapsing them together. But it is not going to tell you whether the 357 stratification is missing something beyond that.

DR. HOGAN: Right.

DR. SEDRANSK: I might just comment on the obvious. Again, this is just part of the process of looking at coverage factors themselves and they tell you what is going to happen. Looking at this—I have been sort of staring at it for five minutes—it looks as if there is not very much difference in the models, but there could be little differences if it is weighted in different ways. I think we need to go past this stage and map these graphs. I presume you are not going to make decisions based on these.

DR. HOGAN: No, no. I think the real exciting graph is when we crank this out, say, for states or the 16 largest cities or something, where we have not created the grouping, and then have the differences between models relative to the differences between states, that kind of analysis. Then that really tells us that the choice of models can be very important. On the other hand, if all the states pretty much come up the same way regardless of model, then the choice of models is less important, within the class of models we are looking at. It is the next two steps, and if there is a third step that Alan wants to suggest, how best to summarize these and look at them—we do want to look at race. That is a very important dimension. Political geography is a very important dimension. Are there other dimensions we should be thinking about?

DR. SPENCER: It might be useful for you to define some summary statistics, to go along either with the tables or the graphs, that capture what you are looking for. We are not with it enough to know what to look for without getting some guidance.

DR. HOGAN: Think about good ones. Yes, that would be helpful.

DR. LITTLE: It looks as if there are big differences in the standard errors.

DR. HOGAN: Yes, that is definitely true.

DR. LITTLE: Do you have any comments about that?

DR. HOGAN: To some extent, it was expected, of course, because the minimum model will have the smallest standard error, and as we throw in more and more stuff, it is going to, to some extent, expand. To the extent we are sharing information from other groups here, we are going to reduce the standard error.

But the other thing, of course, is that to the extent to which we have chosen truly homogeneous post-stratification within these groupings, then our unit variance within that alternative would be smaller, and we should capture some of that within there. Some of it was absolutely predictable. But some of the others, where they are much smaller, are quite interesting. We need to focus on that.

DR. BROWN: This is kind of a technical issue, but it may play out in going ahead with this. Isn’t it true that in the kind of risk comparison that you are ultimately contemplating making, where you add up the variance and the squared bias, the level of aggregation will greatly affect the balance between the two terms? The geographical population size of the target blocks that you break it up into should affect the relative importance of the variance and squared-bias terms.

DR. HOGAN: I think that is correct, yes.

DR. BROWN: And so it may end up with very different kinds of results in terms of comparisons of the models.

DR. HOGAN: It may. It would be interesting if one model, as it summed up through higher levels of geography, reacted quite differently from another model. Within a model, going from a smaller to a larger geography, you would expect the errors to shrink. It would be interesting if, as you summed up, the errors shrank a lot faster for one model while in another model they kind of plateaued. We have not gotten to that stage of the analysis. I have not started thinking about that. But indeed, if that were the case, the one where they shrank would be quite good.

As I sum up, I can understand how both would kind of shrink, but it is not intuitive as to why one would shrink a lot faster than the other, except if it is a much better slicing of the pie.

DR. BROWN: It depends on what sort of bias it is. But bias does not have to shrink as you aggregate, whereas coefficients of variation do shrink.

DR. HOGAN: That is right.

DR. ZASLAVSKY: If you are just taking a linear combination of post-strata...

DR. HOGAN: If you are going up in geography, it is bringing in areas that are dependent on other post-strata. But also to the extent that, as you summed up, you got closer and closer to the domains that you are directly estimating, then you would expect the synthetic bias also to shrink. So both are going on.
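
[Stated roughly, and only as an illustration of the exchange above: if an aggregate is $\hat{N} = \sum_{d} \hat{N}_d$ over D comparable domains with roughly independent sampling errors, its coefficient of variation falls on the order of $1/\sqrt{D}$, while a synthetic bias that runs in the same direction in every domain simply adds, $\mathrm{bias}(\hat{N}) = \sum_d b_d$, so the relative bias need not shrink at all as you aggregate.]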

DR. KALTON: May I ask a question about the next step, which, as you said, is to aggregate up in some sense? You gave the example of states. What sort of aggregations are the ones you are going to look at? One kind of principle would be to say that you want to have aggregations that have meaningful analytic importance— people analyze data these ways and so forth. But there is another kind of way in which you could look at it, which would be to think about what sort of aggregations might test out the models in some way—think about things that may be related to the coverage rates, and what have you, and see whether you aggregate that way. For example—I do not know whether this makes any sense—you could think about areas that have had rapid growth, and so you get estimates for aggregations of rapid growth versus not, with the idea that this may be related in some way to how the census will work in those areas.

So there seem to be two possible principles that could be applied to test this. I wonder whether you have thought about what aggregations you are looking for.

DR. HOGAN: I do not think we have thought about it, but now that you raise the issue, we should and we will. I think we have mainly been focusing on how this plays out in political geography. But I think, in terms of its ability to predict the undercount in rapid growth areas, we would be very interested in that. I think some of the undercount in 1990 is explainable in terms of growth in the late eighties, which none of our post-strata take into account. That is a dimension we had not thought about, and I think we will.

DR. PREWITT: One of the questions that I would like for you to reflect upon is, to what extent should we take into account what we know about 2000 census operations? Hypothesize that part of what drives the undercount among the Hispanic population is linguistic isolation. We have designed a census in 2000 that is altogether more linguistically friendly, the questionnaires and questionnaire assistance centers, bilingual enumerators, and so forth and so on. Do we take that into account or not?

Maybe what drives the undercount among the American Indians on the reservations is isolation. They do not have phones, they are hard to find, and so forth. But for 2000, we have put in place a quite extensive apparatus to work with tribal chiefs, tribal leaders, and so forth, to try to compensate for that geographic isolation.

As we move from the 1990 data to the actual 2000 design, to what extent should we build into our thinking our own design? Maybe what drives the undercount among African Americans is education levels. But we do not have anything in the census that compensates for that, unlike for linguistic isolation, where we can do things. What drives it among the Asians is partly this fear of deportation, and we have put a lot of effort into getting the message out about confidentiality and getting INS to write letters that say they are protected.

There is a whole series of things we are trying to do about the differential undercount. How do we think about that?

DR. SPENCER: If labor were not an issue, I would do sensitivity analyses. There is one simple thing you could do. Since you are simulating this off of 1990 data, if you thought that you were going to reduce the undercoverage of Hispanics by half, you could take half of the nonmatches for Hispanics and recode them as matches and then rerun your estimates, and then try your different models and see how they change.
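
[A minimal sketch of the recoding exercise Dr. Spencer describes, in Python with simulated data and hypothetical field names; the real exercise would be run on the 1990 P-sample files.]

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 20_000
p_sample = pd.DataFrame({
    "hispanic": rng.integers(0, 2, n),
    "matched": rng.binomial(1, 0.88, n),
})

# Baseline match rates by group; a simple DSE scales the census count by
# roughly 1 / match rate, holding the correct-enumeration rate fixed.
base = p_sample.groupby("hispanic")["matched"].mean()

# Scenario: half of the Hispanic nonmatches are recoded as matches.
scenario_data = p_sample.copy()
nonmatches = scenario_data.index[(scenario_data["hispanic"] == 1)
                                 & (scenario_data["matched"] == 0)].to_numpy()
flip = rng.choice(nonmatches, size=len(nonmatches) // 2, replace=False)
scenario_data.loc[flip, "matched"] = 1
scenario = scenario_data.groupby("hispanic")["matched"].mean()

print(pd.DataFrame({"base_match_rate": base, "scenario_match_rate": scenario}))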

DR. ZASLAVSKY: I wish I could be that optimistic, but I would base my design on the assumption that all of the problems of undercount that existed in 1990 will still be there in 2000. If you can measure them and show that they have started to go away because of the improvements you have made in that census effort, then good, show that. But do not assume it.

DR. SPENCER: Among other things, you would like to show it. Rather than assuming that it is gone, you would rather have nice-sized PES post-strata there so that you could actually verify.

DR. NORWOOD: Let me just ask a rather strange question. One of the things that troubles me—and I understand that you have to do certain things because that is all the data you have—having looked at and analyzed data over several decades, my impression is that there are enormous social changes that have occurred in this country, certainly between 1990 and now. Aren’t we kind of building in a lag in this, in part because you are basing so much on 1990? I realize you will adjust it as you get into 2000 and can do so. Wouldn’t you be better off if you used some of the data from other surveys to give you some better feel for the social changes that are occurring?

DR. HOGAN: It is a good question. I am not sure how we would bring in all of that.

DR. NORWOOD: It may be a question for 2010, I know. But it troubles me.

DR. HOGAN: There are some areas where we are doing that, but they are fairly limited. For example, I mentioned the multiple-race option. We have no data at all from 1990. We do have some tests we ran earlier in the decade, we have dress rehearsals, and we also have some studies on birth registration data, none of which directly answers how these people can be captured in the census. But it gives us data that people use to think through the problem.

So we are doing that to a limited extent. Perhaps we can think of ways to do more, but only to a limited extent.

DR. SPENCER: Getting back to the question of what to do about possible changes in undercoverage rates, it is not clear to me that you have to have the exact set of post-strata locked in before you analyze the census data. I guess this is a question. If you do not—and you said that you could have additional variables in the programs and then decide not to use them—that gives some practical benefit to allowing for these possible variations. You can say, “Well, maybe we will build in a higher-dimensional post-stratification than we will really use.” If you are not surprised, if things go according to plan and undercoverage rates are reduced, that may shift us to one post-stratification, and if things are the way they were in 1990, then we will go with a different post-stratification. You can have that option to use in real time. It is not clear to me whether that is an option or not.

DR. NORWOOD: You have to pre-specify everything you are doing.

DR. HOGAN: You have to pre-specify in a couple of senses. The real specifications of the computer program are set up for what you might look at and how you might collapse it. We faced this problem back in the early nineties, after we had finished the undercount adjustment for the post-censal estimates. You have to be very careful, obviously, in looking at how the undercounts happen to come out and then choosing how to collapse, because that makes it virtually impossible to estimate the variance. So when we developed the current very popular 357, we knew what the undercount was, so it was not completely naïve. We focused on census mail-back rates, other things besides the census, to develop our strategy, and then we brought it over as a chunk, so we could at least do variances. Doing that on the fly would be exceedingly difficult, both politically and in terms of programming and statistical honesty. I have one other thing on the...

DR. EDDY: Before you do, I wanted to ask a very technical, but I think maybe important, point about the graphs. If I understand what you have plotted here, this is E to something or other, where the something or other is a regression. These are the predicted values and intervals around the predicted values—are they not? What are they?

DR. HOGAN: They are the coverage factors.

DR. EDDY: They came from a logistic regression, right?

PARTICIPANT: No. They came from these post-stratifications.

DR. EDDY: I guess what I am troubled by is the fact that the intervals are symmetric, and it seems to me they ought to be asymmetric, because the natural units to have done this in would have been logarithms.

DR. HOGAN: What is plotted there is the coverage factor, which is the dual-systems estimate divided by the census. So it is naturally centered around 1, which is perfect coverage. What is below 1 is overcount; what is above 1 is undercount.

DR. NORWOOD: We are going to give Howard about 30 seconds, and then we are going to have a break for 15 minutes. Then we are going to have Howard finish up rather quickly, because I want to leave time to go through this whole group of people facing us and have each of you tell us what you think. I think that would be useful.

DR. HOGAN: I am virtually done with everything I have to say about post-stratification, except I want to point out—you do not have to read it today—Q-8 [Fay, 1999], which is where Bob Fay has worked out some of the issues on ratio bias and some of the dangers of bringing in too many variables. It is there if you want to review it. That is about all I wanted to say.

DR. NORWOOD: I am going to ask Howard to discuss briefly the sample design, and then move into, perhaps, the missing data. Then I am going to ask each of our visiting experts to make any comments that they want to share with us, and then other people on the panel.

REMAINING ISSUES FOR A.C.E. SAMPLE DESIGN

DR. HOGAN: I am going to talk very briefly about our next step on the sampling, which is how, starting from the allocations we have assigned to the states, we distribute the sample within the states. We have our big listing sample right now. We have already, in the previous step, decided how much of that will go to each state when we move from the 2 million we have listed to the 300,000 we will interview. Then we have to allocate them within the states.

DR. ZASLAVSKY: Could you just say quickly how you did that first step? I know this is already decided, but what is the logic of how you got to the states?

DR. HOGAN: We assumed proportional allocation within the states, and we simulated various carrying-down things in terms of supporting the 357 post-strata. We obviously had the cart before the horse here. We had to come up with our sampling plan before we knew our post-stratification plan. We assumed proportional allocations, simulated various allocations to states, and then figured out the properties of the synthetic estimate carried back to the states and also to the congressional districts and other areas. We looked, because the panel suggested it, at making sure there was a minimum sample size per state. The suggestion was that if you can do that and it does not cost too much, it would be a good thing to do. We could do it up to about 1,800 per state without costing us much. We oversampled Hawaii a little bit.
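
[A minimal sketch of a proportional allocation with a per-state floor, in Python with hypothetical listing counts; the Bureau's actual allocation also reflected cost and field constraints.]

def allocate(listed_by_state, total=300_000, floor=1_800):
    # Give every state the floor, then split the remainder in proportion
    # to the listed housing units.
    remainder = total - floor * len(listed_by_state)
    total_listed = sum(listed_by_state.values())
    return {state: floor + round(remainder * listed / total_listed)
            for state, listed in listed_by_state.items()}

states = {"CA": 240_000, "WY": 8_000, "NY": 150_000}  # hypothetical listed units
print(allocate(states))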

That is, very quickly, the process that got us here. Now we have to allocate within the states. There are essentially two dimensions here that we want to take into account: the demographic composition of the block—and by that I mean essentially the demographic composition in 1990—and the second dimension, our new measures of size. Our initial measure of size was the census Master Address File of last summer. We now have the updated census Master Address File, and we have our A.C.E. listing number. So we now have two additional measures of size for the blocks.

What we are looking at here is, first, classifying the blocks, minority/non-minority, based on the 1990 composition. In the thing we gave you—R-19 [Mule, 1999] is what I will be talking about—we have a couple of propositions of how we might map them in. It turns out that we have looked at them both, and they do not make too much difference. So we will probably go with the one in Table 1, which is similar to what we did in 1990. This is on page 4 of R-19 [Mule, 1999]. We are classifying blocks according to whether they contain significant minority numbers.

We also have the results of our new measures of size, which essentially can be broken into three categories here: the census and the A.C.E. basically agree on the number of housing units there; the census lists a lot more than the A.C.E. listed; or the A.C.E. lists a lot more than the census listed.

You can sort of think through the problem. If we have a non-minority block, a white block, for example, and the census and the A.C.E. agree, then things are looking good. If we have a block where the census has 500 housing units and we only found five, or vice versa, then that could be indicative of a coverage problem or a geocoding problem.

So in drawing our sample, we are going to look at the dimensions of size, minority/non-minority, and draw our sample from there.
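
[A minimal sketch of that two-way block classification, in Python with hypothetical field names and cutoffs; the actual definitions and cutoffs are in R-19 [Mule, 1999].]

import pandas as pd

blocks = pd.DataFrame({
    "pct_minority_1990": [5, 62, 30, 80],   # hypothetical 1990 composition
    "census_hu": [120, 480, 60, 500],       # housing units on the census MAF
    "ace_hu": [118, 150, 210, 5],           # housing units from the A.C.E. listing
})

# Dimension 1: minority / non-minority, from the 1990 composition.
blocks["minority_stratum"] = blocks["pct_minority_1990"] >= 50  # hypothetical cutoff

# Dimension 2: do the two measures of size agree?
ratio = blocks["ace_hu"] / blocks["census_hu"]
blocks["size_agreement"] = pd.cut(
    ratio, bins=[0.0, 0.5, 2.0, float("inf")],
    labels=["census_much_larger", "agree", "ace_much_larger"],
)
print(blocks)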

We also want to make sure that the weight variations of this second-stage sampling are not too extreme. We do not want to tremendously oversample these relative to the others, because, certainly for the minorities, we are dealing with very old data, 10 years old. We want to make sure that when we optimize our sampling probabilities, we do not differentially sample too much.

We have looked at a number of options. If you turn to Attachment 1, I can take you through them very quickly. Attachment 1 of R-19 [Mule, 1999] just gives you the differential sampling factors between the minority and the non-minority. Proportional allocation—there is no differential sampling. Optimal groups—the next two are just two ways of measuring minority, but it is set up to optimize equal coefficients of variation [CVs] or optimize the sum of the CVs for the 51 post-strata groups.

The next one emphasizes minorities. We really are putting a lot of sample into getting very good minority estimates—in other words, trying to have their CVs be as good as those for the non-minority groups.

Finally, we asked, how good can it get if we keep all the minority blocks? That is the most we could do. If you look at that, you can see that it is probably not very good. The differential sampling weights run pretty high—3, 4, 5. The middle two columns here do not look too bad, with differential oversampling weights of 1 to 2. A few of them get up to 3. We might want to cap that at 3, so we never oversample hard-to-count groups by a factor of more than 3.
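
[A minimal sketch of the cap just mentioned, in Python; the rates are hypothetical.]

def capped_differential_factor(minority_rate, nonminority_rate, cap=3.0):
    """Ratio of minority to non-minority sampling rates, capped at `cap`."""
    return min(minority_rate / nonminority_rate, cap)

print(capped_differential_factor(0.20, 0.05))  # raw ratio 4.0 -> capped at 3.0
print(capped_differential_factor(0.08, 0.05))  # raw ratio 1.6 -> unchanged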

That is where we are going on the carrying-down. We are still in the stage where ideas and suggestions within this paradigm, within this framework, are possible. If I had had to choose yesterday, it probably would have been the second column here.

So that is, very pithily—maybe too pithily—where we stand on the sample.

DR. KALTON: May I ask why you do it by state? According to this, you would have all these different rates. Each state would have a different rate. Why would you do that?

DR. HOGAN: Why would we have differential within the state?

DR. SPENCER: I think he asked why differences across states.

DR. KALTON: Why do you have differences across states? You said you are not terribly interested in getting the state estimates.

DR. HOGAN: We have already drawn our listing sample, which was designed to have a state estimate. What we are out there listing now is a state sample that oversampled some states, because at the time we designed it, we attempted to have stand-alone state estimates. Early in the process, we allocated that to the states, keeping a minimum sample size per state, but otherwise optimizing.

DR. KALTON: Why? I know the panel recommended it, but it is not clear why one would recommend it.

DR. HOGAN: You might want to ask them why they recommended it. What we heard from the panel was, first, it is good to make sure that every state at least looks like it has an adequate sample, even if it is a synthetic estimate that you ultimately use. It is sort of protection if things are more geographic than your sample design had anticipated. Also I think at least some of the panel had hoped to use this file to compute composite estimates—maybe not for the production, but for later research or for evaluation of our work. They wanted to make sure that we had enough per state to allow that line of research to go forward, if it did not cost us much. We looked at it, and we could do it without giving up much.

DR. KALTON: But the second reason you gave is not going to actually occur, the second one being that if you found something geographic, you would do something about it. The samples do not sound as if they are big enough to do that, and you would not be able to anyway.

DR. HOGAN: No. I think the third reason was really the—the first and the third. The first reason was just the appearance of—you would tell the people in Montana, “Don’t worry, we have North Dakota.” I think that is a very real issue. To the extent we can address those concerns and not give up precision to address them, I think it is quite legitimate.

The other was for the evaluation and further research, retaining enough sample to allow for a composite estimate.


MR. WAKSBERG: Howard, I am not sure you are necessarily going to improve statistics for, let’s say, Wyoming or Alaska by getting more cases there than you would by getting closer to an equal-size sample, so that the national estimates you use for the post-strata, which you would apply to Wyoming and Alaska anyway, are as good as possible—your synthetic estimate would not be better. So what you are doing is sacrificing the synthetic estimate for what I think is an illusory attempt to get state data.

DR. HOGAN: We looked at what we were sacrificing. In simulating what would happen if we had a minimum sample size of this, this, this, this—how good the synthetic was, given the various minimum floors—we decided that by putting a minimum of 1,800 in each state, we really did not have to sacrifice very much—hardly at all. The post-strata accuracy is what we want, but our research showed we could meet that other requirement and not sacrifice very much at all. So we went down that road.

DR. NORWOOD: May I point to the CPS state estimates? It is useful to be able to tell people that there are cases in each state.

MR. WAKSBERG: CPS state estimates were designed to provide not a synthetic estimate, but, presumably, good estimates.

DR. NORWOOD: It is a different question. But if you are going to use this to evaluate where you are, it is useful to have.

MR. WAKSBERG: It is a different kind of thing. You are talking about an estimate of...

DR. NORWOOD: I know. We are talking about a different thing.

MR. WAKSBERG: So you put 1,800 in there. You may get two or three or five or eight cases of undercount. You are certainly going to do better with a synthetic estimate than an estimate based on that kind of sample.

DR. HOGAN: The other consideration, which we really have not discussed, is that when we had to make this allocation, we did not have our post-strata design. So there may have been some geographic dimensions—for example, division—that could have come in later that this protects against. But it was really the kind of issue that Janet was talking about.

MR. THOMPSON: I just want to clarify one thing. Joe, there is a great deal of interest on the panel's part in being able, at a minimum, to evaluate the synthetic assumption after we have done the A.C.E. By putting the minimum sample in the states, it does give us the ability, after we have conducted the A.C.E., to evaluate the synthetic assumption carried down to states. A lot of people thought that was a very important thing to do.

DR. EDDY: I want to ask a question about the listing, the second listing. If I remember right, the MAF is going to have millions of extra houses that were added by the cities. I have forgotten the numbers, but....

DR. PREWITT: It is 2.3 million or fewer.

DR. EDDY: I remembered a bigger number than that.

DR. PREWITT: That is the number we got only from LUCA (Local Update of Census Addresses Program). We got some from LUCA and our own block canvassing. The 2.3 million came in only from LUCA, and those have not been checked. We do not think it will end up that high.

DR. EDDY: I guess the question is, is that going to be used in any way to inform the listing that you are doing now?

DR. HOGAN: Not to inform the listing. Our A.C.E. listing is independent.

DR. EDDY: What I am thinking is, whatever mistake you made the first time, you are now going to go down and do it the second time, and somebody told you there is a mistake, and you are just going to ignore it?

DR. HOGAN: First, for our initial sample, we brought in the census housing unit counts as they existed in June, for our measures of size. We are now going to bring in through July. So then, when we do our sample allocation, we will have both measures of what the A.C.E. listers found and what the census now has.

DR. EDDY: How are you going to use them both?

DR. HOGAN: That is one of the issues I am putting on the table. We can simply use one or the other or both as a measure of size, in the traditional sampling situation. If we sampled it as a small block and we find it as having 500 units or the census now has 500 units, we would want to take that into account in the traditional measure of size kind of situation.

In addition, we are looking at using the differences between the number we have listed and the number the census has to indicate a problem block that is likely to have high coverage error and try to include it with lower weights.

MS. KOSTANICH: It is the December MAF we are looking at. When we actually do it, we will have the most recent MAF.

DR. EDDY: I see, okay.

DR. HOGAN: That is the kind of sampling issue that I am laying on the table today.

MR. WAKSBERG: Let me follow up on this, because Bill has a useful point here. Once you have done 2 million, it is a shame not to squeeze all the information you can out of that. In addition to doing what you are talking about here, which is sort of using it selectively, you should be able to think in terms of some kind of a double-sampling scheme to help you in your estimate of missed housing units—even something as simple as doing a computer matching check of addresses and using that as a basis for double-sampling sample selection or estimation.

DR. HOGAN: To a certain extent—I do not think to the exact extent you are talking about—we are doing that, by bringing in the aggregate differences between what we have and what the census has in choosing our second-stage probabilities. We are using the information very much. One of the big advantages of this is, if blocks change size, we can make sure we include them.

To the extent that you are talking about doing some sort of more sophisticated ratio over the whole sample and then applying the subsample to the larger sample, that is really not part of our plans. We can give it some thought. I am not sure to what extent we can modify things to accommodate that. It is an idea that we have not pursued.

So we are not all the way to where you are suggesting, but we are very much trying to use the information. For the small blocks, we have a very explicit two-stage sample, where we list them and then differentially sample depending on what we find. If it is still zero, we do not go out on it. If it is still 1 or 2, it is a 1 in 10 sample. If it is 20 or more, we take it for the subsample.
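
[A minimal sketch of the two-stage small-block rule Dr. Hogan describes above. The cutoffs for 0, 1-2, and 20-or-more listed units are taken from his description; the rate used for blocks with 3-19 units is not specified in the discussion and is a placeholder assumption, as are all names in the code.]

```python
import random

def retain_small_block(listed_units: int, rng: random.Random,
                       mid_rate: float = 0.5) -> bool:
    """Decide whether a relisted small block stays in the subsample."""
    if listed_units == 0:
        return False                 # still empty: do not go out on it
    if listed_units <= 2:
        return rng.random() < 0.10   # 1 or 2 units: a 1-in-10 sample
    if listed_units >= 20:
        return True                  # 20 or more: taken with certainty
    return rng.random() < mid_rate   # 3-19 units: assumed intermediate rate

rng = random.Random(2000)
kept = [n for n in (0, 1, 2, 7, 25) if retain_small_block(n, rng)]
```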

DR. KALTON: May I come back to the question I started with, which was why the differential rates across states? You have said that you are going to have your sample allocated that way. That does not say that you necessarily need to change the sampling fractions for the minority blocks versus the others differentially.

PARTICIPANT: On some of those numbers—I call your attention to Florida and Illinois—those ratios are very stable all the way across. We are going to take almost 100 percent of the Illinois sample, because in our first listing we do not have enough; we have just barely what we need to make an ideal second-stage listing. Keeping practically all of Illinois—what 1.06 means is that we keep all the minority blocks in Illinois and most of the non-minority blocks. That is why it is 1.06. These numbers are actually constrained. Florida is another example, which is 1.0. I think we are keeping all of Florida. So there are some states where the first-stage sample did not seem so terribly big, compared to some other states. That is why these numbers reflect some of the constraints.

DR. KALTON: That is a good reason for those cases, clearly. The other question that I was not clear on is, is this all tied into 1990 data, as if that was how the current world is?

DR. NORWOOD: Yes, so far.

DR. HOGAN: In terms of the racial composition of a block, yes. In terms of sampling strata, we used the 1990 composition. Obviously, in terms of estimation domain, we will use whatever is reported in census 2000. The racial and ethnic composition is based on 1990. The measures of size are based on census 2000 address lists, either from the decennial address file or from the A.C.E. listing. So the measures of size are current. But we do not know if this block is Hispanic or Hawaiian or whatever, except to the extent it was 10 years ago.

DR. SPENCER: Are the compositions expected to be fairly stable?

DR. HOGAN: The racial compositions?

DR. SPENCER: Yes.

DR. HOGAN: It depends on what you mean by stable. If you look at it between decades, clearly there are changes, but on the other hand, there is certainly a tendency for a minority block to stay minority 10 years later. So it carries a lot of information.

In 1990, we used the 1980 data, which worked pretty well, except for areas, for example, in southern California, where we had a huge influx of Asian immigrants. We did not know about that until after our sample. But most of the areas that we sampled as being predominantly African American based on 1980 still were in 1990.

When you get to some small groups, it is a little bit more problematic to control your sample size.

MR. WAKSBERG: Howard, the issue of consistency of classification over time is one I dealt with in a paper, where we examined the classifications coming out of the health interview survey, in 1980 and 1989, for identical segments. You are right; there is a fair amount of change, more for Hispanics than blacks, but for both of them. You might want to look at it, to give you some clue as to the extent of change, assuming the 1990-to-1999 change is like the 1980-to-1989 change.

DR. HOGAN: I think we did map 1980 to 1990 to help us calibrate this, but I cannot remember the results of that. If you will give us the citation, we will go back and read that paper.

MR. WAKSBERG: Survey Methodology, about five years ago.

A.C.E. ESTIMATION ISSUES—MISSING DATA

DR. HOGAN: That is about all I really wanted to say about the sampling. The only other topic I will bring to your attention, but not discuss to any great extent, is one I mentioned this morning, the missing data. This is an area where we are just beginning to build the models. To the extent that this panel can think about the issues and give us some insights, we would certainly find it quite useful.

In our dress rehearsal, we used a very simple ratio adjustment, controlling essentially on whether it went to follow-up or not. If a case went to follow-up, it obviously went to follow-up for a reason, which is often that it could not be easily matched. So cases that did not go to follow-up were essentially adjusted—I am talking here in terms of probability of being included in the census—based on other cases that went to follow-up successfully; cases that could not go to follow-up, on all cases. But it was a ratio model. We did that because we felt constrained to do this separately by states.
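
[In rough notation, one way to read the simple ratio model just described is the following; the symbols are illustrative, not the Bureau's. Within an adjustment class $c$ (for example, cases of the same follow-up status in the same state), let $R_c$ be the resolved cases, with weights $w_i$ and resolution outcomes $y_i \in \{0,1\}$ (matched or correctly enumerated, as appropriate). Each unresolved case in the class is then assigned the common probability
$$\hat p_c = \frac{\sum_{i \in R_c} w_i\, y_i}{\sum_{i \in R_c} w_i},$$
so the class simply carries its observed rate over to its unresolved cases. Whether the weights enter, and exactly how the classes are defined, are assumptions here.]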

With the ability to share information nationally, we probably have one model. We need to think about what additional variables to bring in, and also whether we have the time, or whether it is worthwhile, to go back to logistic regression, as we did in 1990.

The other issue that we are thinking about—and it is a very tough one, but this is a tough group—right now we impute the P-sample and the E-sample independently. That is, we impute the probability of being correctly included or missed in A.C.E. and the probability of being matched and not matched within the same block. To the extent we could learn from each other, we could probably build in some correlations and some balancing that would give us a better estimate. We have begun to think about that, but we have not gone very far down that road. It is a topic I would like to put in front of the group.

DR. EDDY: Which imputation are you talking about? Is this whole-person?

DR. HOGAN: This is imputing, on the P-sample side, enumerated/missed, and on the E-sample side, correctly enumerated/erroneously enumerated. This is different from the issue this morning, which is imputing race or characteristics and putting them with a post-stratum. In this case, you know it is in the same post-stratum—actually, you do not. You have a group of cases in the same block, some of which you could resolve on the E-sample side and some of which you could resolve on the P-sample side. You have two refusals, to take the easiest case. You have a household that refused, on the P-sample side, to cooperate and that refused the census. You have two sets of missing data.

If you imputed them separately, you could impute one as being correctly enumerated and the other as being erroneously enumerated, or something like that. If you can share the information in a meaningful sense, you might be able to do better.

DR. SPENCER: I thought you did weighting adjustments for that.

DR. HOGAN: For whole-household, we do.

DR. SPENCER: So this is refusal of a person.

DR. HOGAN: For the follow-up, yes. For whole-household imputation, we do weighting. For characteristics like we talked about this morning, we do essentially [an imputation]. I am talking about a fairly nuanced issue in terms of capture probabilities.

DR. SPENCER: Right, unresolved, getting the probability of match.

DR. HOGAN: Right.

DR. SPENCER: To know how to build in a dependence, you need some data. What data do you have from 1990 to get us into this?

DR. HOGAN: Only to the extent that you have similar cases in the same block.

DR. SPENCER: Wasn’t there a subset of cases that were initially unresolved and maybe got resolved, maybe in evaluation follow-up? You could use those to try to see how much dependence there was.

DR. ZASLAVSKY: I am not sure I get, in broad terms, what the theory is under which you would expect dependence between these two different cases.

DR. HOGAN: We know in the resolved cases, for example, that very often a census miss creates a census erroneous enumeration, and vice versa. For example, the people who live there on census day have moved out and were missed. The people who moved in got counted in the census nonresponse follow-up. Thus, the very mechanism that created an omission created an erroneous enumeration, which, at some level, nets out.

DR. ZASLAVSKY: But you have address information that tells you when you get one of those cases. I see the theory when they are at the same address. That seems pretty clear, why you think there would be a relationship and what is going on. But I do not understand why two that are within the same block at different addresses would be thought of as being dependent.

DR. HOGAN: For the first time, we will have the housing-unit matching. We now have two stages of housing-unit matching. We do have that kind of information. We can think about how to bring it in.

DR. ZASLAVSKY: That is a theory I understand, but I do not understand the other one.

DR. HOGAN: That is because the theory has not been developed yet.

DR. ZASLAVSKY: Maybe this gets back to the first question, which we kind of skipped over, of whether you should be going to something more complicated than just the ratio model for the imputation. The point of doing something more than a simple ratio was that the information you have about some characteristics of the unresolved cases is helpful. The evidence for that was that in the 1990 logistic regression model the average imputed probabilities for the cases for which you had to impute was not the same as the average rate among those that were resolved, because they had characteristics that were systematically different. That would be the rationale for doing that. That is the main rationale. I guess a secondary rationale is whether you pick up anything or other information you might have about them, like match codes and things like that.

The method that was used in 1990 was a complex model fit with by now obsolete technology, both hardware and software. I think that probably, in terms of just sheer technical difficulty of doing it, it should be a lot easier now. You could also do something somewhat simpler, but not as simple as doing a ratio adjustment.

The reason you might not see it making very much difference in the simulations in Q-3 [Cantwell, 1999] is that the unresolved cases are only a fraction of the cases, and if the difference is not huge and it only applies to this small part of the cases, then it will not make an awful lot of difference. But also part of what may be going on there is that you tried this out on the test sites and the test sites are inherently more homogeneous than the larger areas in which you might be applying this. I am not sure what the reference group would be.

DR. HOGAN: Hard as it may be to believe, I have said everything I want to say.

DR. NORWOOD: What I would like to do now is to go around the panel and our invited guests and get any comments that you have, in general or on specific things.

COMMENTS FROM PANEL MEMBERS AND GUESTS

DR. ROBERT BELL: Let me make some comments, mainly about the sample design question. Some of them apply to the post-stratification as well.

I had some things to say that Joe Waksberg got to first in terms of criteria. I think the criteria are fairly important in terms of whether or not one is interested in, say, coefficients of variation for post-strata or something else. One concern I have about post-strata is that for some of the smaller ones, I think a big CV probably does not matter that much, if it is a small post-stratum. So I think it is useful to look at CVs for some geographic areas. I think my preference would be for something like congressional districts, as opposed to states.

It is also important to try to think about how to combine these numbers. For instance, in Attachment 2 [Mule, 1999], there are a large number of coefficients of variation, or differences in coefficients of variation, among the different sampling plans. Some go up and some go down. The real issue is how to weigh the fact that some went up against the fact that some went down. That is a difficult issue.

Another thing about the sample design is the issue of coming up with these weights, which are associated with the oversampling or undersampling of different types of blocks. While it is true that indiscriminate variation in weights is bad, there are certainly some areas, which you have already taken into account, where making some weights smaller by oversampling is a good idea. In particular, I like the idea of taking all the blocks where there is a discrepancy in the housing counts between the A.C.E. and the MAF, and a couple of other areas where you have done that.

The other area where I think it makes sense to do that is this one of trying to oversample high-minority areas. Potentially, you might also want to try to oversample blocks with multiunit structures or where there appear to be a lot of rental units. The reason is that we know that those types of areas are going to have higher undercounts, and most likely higher variances as well. Wherever you have high variances, or wherever you are likely to have outliers in the block counts, you would like to have lower weights.

There are two reasons for oversampling high-minority areas. One is the same thing, that you are likely to have higher variances. The other is that there is interest in the minority post-strata per se, and also in areas that would tend to have a large number of minorities, so that getting the synthetic estimates more precise in those areas is important.

One idea that I think might be useful—one of the things that was in one of these memoranda was a concern that in some states there were not very many minority blocks, when you looked at 50 percent or more minority. In some other states, if you took all the ones that were 50 percent minority, you sort of used up all the sample. One way to avoid that problem might be to make some of these definitions of what a minority block is and what a minority block is not dependent upon the state—for example, it is a high-minority block if it is in the top 20 percent relative to that state, or something. In some states that would go down pretty low, and in other states, it might be pretty high. That would be another way of controlling the number of blocks that were deemed to be minority within a state.

A similar idea might be of use in the post-stratification, depending on whether you did something like using high minority status or low mail-back response rate as a post-stratification variable. Instead of setting an absolute limit, it might make sense to set a limit that was relative to that region and that geographic group—in other words, urban over 250,000 population, other urban, and non-urban. I think that is even more of an issue for the post-stratification because you really want to avoid small post-strata. If a particular area of the country in, say, the "other urban" group generally had very high mail response, it might be difficult to find low-mail-response-rate areas, unless you did something in a relative sense.

I had one other thought. In Attachment 1 [Mule, 1999], there are some very large numbers, as high as 8, for instance, in Montana, meaning that minority blocks were eight times as likely to get taken as non-minority blocks. That does not bother me particularly. All that means is that those blocks are getting low weights. I really worry a lot when I see a few blocks with very high weights relative to the others. But as long as the majority of weights are not going up too high, then a few blocks—maybe it is only three blocks that would be taken in Montana—if you have three blocks that are taken with certainty and have low weights, that is not as big a problem as if you had three blocks that you took with very low probability, giving them weights that were eight times as high as everything else.

DR. NORWOOD: Larry?

DR. BROWN: I started out with sort of an assignment to comment on something you did not talk about so much. The agenda lists estimation issues, and most of those you did not really talk about in detail, which seems right in context, because, as I understand it, those do not have to be settled as soon as the things we did spend more time on.

Some of these you did touch on or talk about, in terms of missing data, for example. How that is to be handled is an important issue. You did not tell us how you would do it, but you did tell us you are worried about that.

I want to emphasize again something that came up this morning, which is that the relation between the missing-data question and the imputation question seems to me to be very important. I suspect that a lot of what we saw going on has to do with the imputation schemes. Those need to be reviewed, along with their relevance to matching problems and other kinds of estimation problems. There were several other issues you mentioned. One of these has to do with search areas. I see up on the board that you were talking about this, where you have targeted extended search.

DR. HOGAN: Alan was talking about it. I was listening.

DR. BROWN: I agree that it is an important issue for estimation. I do not quite know how you are going to handle it this time. There is this interesting other Q paper [reference not clear] we got that we have not talked about. I do not quite follow how that is going to play out. All I am saying is, I need to know more.

We mentioned briefly—but it is going to have to come up again—depending on how many post-strata you choose, you may need to do some collapsing, and if you choose more post-strata, you will need to do more cross-category kinds of things. That does imply that there will be some kind of raking in the estimation analysis, especially if the collapsing is not the same—I do not know how to say this—if the collapsing is crossed with something else.

DR. HOGAN: Yes. I see raking and collapsing as being sort of two alternatives to the same issue. I am not sure where you are going with that.

DR. BROWN: I think I am using your terminology from an ASA meeting at some point, where you talked about how to handle collapsed tables by, actually, what I think was the Expectation-Maximization (EM) algorithm, but where you were going to rake, readjust, rake, readjust, and come up with an estimate, which I think is the same as the log-linear logistic regression estimate.

DR. HOGAN: I will have to review that paper, because I cannot remember it.

DR. BROWN: Anyhow, if you collapse, you need to do something in the estimation that is not as straightforward as a simple ratio estimate.

DR. HOGAN: For example, if we decide that we cannot support seven age/sex groups and we collapse to one, then we would just use the one.

DR. BROWN: No, no. If you decide that there are not enough Asian/Pacific Islanders in the Midwest and the South, and you collapse that category in the Midwest and the South into one category that crosses Midwest and South, then you are just going to...

DR. HOGAN: In our 1990 approach, we just have one factor—we did this for, say, rural African-Americans outside of the South—we had one factor that applied to the Northeast, Midwest, and West. Actually, we had one national one, because there was nothing in that whole thing.

DR. ZASLAVSKY: So, Larry, is your point that in that situation, unless you do something more complicated, your direct estimates for the Midwest and the South are no longer preserved?

DR. BROWN: I think so, yes. There are things you could do that are different from what you did, but maybe what you did is more consistent.

There is an issue that has bothered me. Probably there is not much to do with it. The way I understand it, institutional populations are not in the A.C.E., and so undercounts and overcounts of those populations or people who are partly in those populations and partly elsewhere are less likely to be caught. I am particularly worried about student populations.

DR. HOGAN: Focusing specifically on students, we have no correction factor for the student population. It is assumed to be what it is. If a student was counted at home and should not have been, then that is an erroneous enumeration and would lead to reducing down the DSE for the home post-strata.

DR. BROWN: You do not adjust the student population, so the student population stands. At the moment, are there any special estimation plans for other groups like the non-usual residents? The ratio estimator that you planned originally— that original plan has been dropped, I understand.

DR. HOGAN: For the service-based enumeration?

DR. BROWN: Right.

DR. HOGAN: It has been dropped for the apportionment counts. We still have not made a final decision on whether we can bring that in for later counts.

DR. BROWN: So the answer is, the question still remains.

I have one more point that I wanted to raise. It kind of bothers me, from a purist's point of view. It may only be a purist's point of view, but maybe there is something to it. In principle and in practice, two address files are gathered on the post-enumeration strata districts: there is a P address file and an E file. Those should be independent. From a purist's point of view, the whole issue should be double-blinded or triple-blinded. Nobody on the E side should know what the P side said, and nobody on the P side should know what the E side said. Someone is no longer blind to that—namely, you. When you talk about having both address files in hand to help decide block reduction, that means somebody is doing something that is integrating the two files or looking at them in comparison.

I just want to make very certain that that does not spread outside of your office and in any way contaminate what goes on in the field.

DR. HOGAN: That particular issue I am very comfortable with. Although the differences determine the probabilities, they are still known probabilities. I do not think that compromises independence. But every time someone goes out to the field and matches and makes a decision—is this a response case or not a response case?—we, to the extent humanly and inhumanly possible, make sure that he/she makes those decisions completely independently of what showed up in the other system, making sure he makes those decisions before he looks at the other system. So it is a continuing design issue that we drum in at every level of design.

In terms of the sample selection, that is the one I am most comfortable with.

DR. BROWN: You would not want it to get out that most or a heavy preponderance of the P-sample is hard-to-count areas, because that would help push your people to....

DR. HOGAN: You are absolutely right.

DR. NORWOOD: If it makes you feel any better, when we went to Sacramento, they told us about the lengths that they went to not to have any contact of any kind, or even be seen anywhere near the same building, with these other people. They said they were made by the Census Bureau to feel almost criminal if they even said hello to somebody who was on the other staff. They were so imbued with this fact that they had to keep completely separate. So I think you have done your job on that one.

DR. BROWN: I think so. I just want to make sure the job keeps getting done, because now the boss is doing something that all of the underlings are not supposed to do.

DR. HOGAN: One quick comment on the extended search and the search area. That is an important issue. I think that may fall in better when we do the DSE piece.

DR. BROWN: Yes.

MR. WAKSBERG: Let me raise a question on that. It seems to me that the issue is not quite as simple as it sounds. If you want to get the best estimate of net undercount, there is no question in my mind about independence. If you want to talk in terms of undercoverage and overcoverage, to the extent that there is confusion between the two, I am not sure that independence is the best thing to do, as compared to having a system that tries to do some reconciliation. Certainly, the gross figures are going to be looked at very carefully.

DR. HOGAN: We may just disagree on that. I think that the way to get even the gross figures is to make sure that the people who are doing the P-sample do not know what is in the census. To me, the whole issue is getting those ratios. The best way to get that first ratio, enumerated to non-enumerated, is to make sure they do not add in a bunch of enumerated people because they met their friends at the Burger King.

DR. SPENCER: Since the net is the difference between the two grosses, and if you need independence for the net, how can you get by without it for the grosses?

MR. WAKSBERG: Take the case of imputation, for example. You are putting people in who may exist and have been interviewed in the neighboring household. It would be nice to know that and not treat it as a plus and a minus, but as no effect on the census.

DR. HOGAN: But isn't that just the search-area issue? If I understand what you are saying, it is just a question of how far away from where you think they should have been counted they could actually have been counted and still be called correctly counted. But you can do that and maintain strict independence.

MR. WAKSBERG: Can you do it as well?

DR. HOGAN: I think you can probably do it better. I cannot think of any reason why you cannot do it as well.

DR. ZASLAVSKY: Howard, my understanding of the Bureau's position on this has always been that you are not really trying to report particularly meaningful figures about gross errors. If two errors balance, you do not care whether they balance because they are an electron and a positron that got created that do not represent anything but are just these two opposite things that appeared in the files—if they are two blocks apart, or something like that—or whether they are really two independent errors. You just subtract them at the end. So the only number you really stand on is net error, and the gross errors depend a lot on the particularities of how you do processing and the estimation, and are not necessarily really descriptive of meaningful characterization of errors in the census.

DR. HOGAN: That is marginally true, but not entirely. Certainly our primary focus is explicitly estimating the net undercount as well as we can. However, after the census, we take the gross files and that is part of the information we use to figure out how to do the next census better. We have made uses of the 1990 gross files in planning census 2000.

MR. THOMPSON: Joe does have a little bit of a point. If our focus was more on just estimating the number of erroneous enumerations, we would probably do it a little bit differently. But our focus is on the net. In doing that, we make some tradeoffs that sort of put a little bit of noise into the estimate. We still have a good measure, but it just has a little noise in it. I think that is what Joe is saying. If our focus was on one of the components or the other, we might do it a little bit differently and give up on some of the independence. But since our focus is on the net for the undercount...

DR. HOGAN: That is true. There are some classes of cases we treat as out of the census, because it gives us a cleaner set of independence. But if your sole focus was on census processes, you might include them and study them and make judgments about them. I see your point.

DR. LITTLE: I had a little bit of a reaction to the comparison of models that we were talking about earlier. I think it is a good idea, but you have to be a little bit careful when you are looking at a set of models, some of which are better than other models, and there is no real context there. You have models where you are throwing out variables where the evidence suggests that that variable is a good predictor of undercount, but you are excluding that variable in that particular model. There may still be some value in doing that, but I think you have to be a little bit careful, because people will tend to look at the variation across those models without taking into account that, from my Bayesian perspective, some of those models have posterior probability zero, and therefore should be excluded from consideration.

That is not to say you should not do it, but I think you have to be a little bit careful about making clear that issue.

I think I sort of said my piece this morning, so I do not really want to go on about it. But basically there is the logistic regression type of way of combining information for predicting undercounts, and then there is this cross-classification idea, where you include all high-order interactions in the regression model. The thinking seems to be that those two are kind of distinct, but I think, if you creatively define the post-stratifier to be linear combinations of variables rather than just variables, then there is no reason why you cannot define a linear combination that is very highly predictive of the thing, and then include that as a post-stratifier in the models.

That is basically what I am suggesting. I do not think it is any harder to do than what you are doing now, and I think that might be a way of bringing in some information that you are currently excluding because you are restricted in terms of the number of variables you have.
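
[A minimal sketch of the kind of score-based post-stratifier Dr. Little is suggesting, assuming hypothetical variable names and cut points; it is an illustration of the idea, not a Census Bureau specification.]

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def score_poststratifier(df: pd.DataFrame, covariates: list[str],
                         outcome: str = "enumerated",
                         n_groups: int = 5) -> pd.Series:
    """Bin a fitted propensity score into groups usable as a post-stratifier."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[outcome])
    score = model.predict_proba(df[covariates])[:, 1]
    # Quantile cuts keep the groups roughly equal in size.
    return pd.qcut(score, q=n_groups,
                   labels=[f"score_q{i + 1}" for i in range(n_groups)])

# Hypothetical use:
# df["score_stratum"] = score_poststratifier(
#     df, ["renter", "multi_unit", "mail_return_rate"])
```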

The other comment is—again, people have said this before—on the imputation issue, where you are imputing post-stratifying information. You are currently using a rather crude approach that is ignoring the matching information. It is ignoring the multivariate nature of the problem, as far as I can tell. If it is a minor issue, then it is probably okay, but if it is not a minor issue, then you should be doing a better job on the imputation. I am sure there are better ways of doing it.

DR. NORWOOD: Bruce?

DR. SPENCER: I want to talk about correlation bias, which affects the estimate. I guess this is an appropriate thing to talk about.

In 1990, the dual-systems estimator was more accurate than the census, in the analyses I did, but the accuracy would have been better had there not been correlation bias. Correlation bias arises because individuals within a post-stratum do not have the same probability of coverage in the census, and these probabilities are rather similar between the P-sample and the E-sample, or the census.

There is information about correlation bias from demographic analysis, but only nationally. That has prevented the adjustments from being imposed at the sub-national level, and therefore at the national level.

I have two suggestions, which, consistent with all my other suggestions today, are not going to be useful to you right now. But they might be useful for evaluation of the 2000 census, and they might be useful for 2010. The first suggestion is a way to do a modified post-stratification that will reduce the downward bias from variability within post-strata. The second is a new method to evaluate correlation bias at the local level. The first method just uses whatever data you have in the P-sample. The second method actually involves additional data collection. Let me talk about the first method first.

It is basically a way of doing post-stratification where you are using covariates that are only available on the P-sample side. Let us take the case of PES-C [the mover treatment in the A.C.E.; see National Research Council, 2001:Ch.6], where you treat movers and non-movers carefully because you know that there is a difference in their capture probabilities. But you lump them all in the same post-stratum, so you are not accounting for that difference; you are averaging.

If you think about the adjustment factor, you can view it as a DSE, where you have the E-sample count, the P-sample count, multiplied, divided by the match rate. You would like to do this separately for movers and non-movers. For the P-sample count and the matches, you have that separately for movers and non-movers. What you would like to have is the E-sample count separately for movers and non-movers. You can estimate how many movers there were in your E-sample count and how many non-movers in the E-sample count by using your information from the P-sample. You can estimate that and come up with a separate dual-systems estimator for movers and a separate one for non-movers. Then you add them together. Then you use that for defining your adjustment factor. That will increase your adjustment factors and get rid of this part of the bias from variability in the capture probabilities. It can be used for movers/non-movers. It can be used for any other covariate that you could measure in A.C.E. that you cannot measure on the short form. That opens up a lot of potential. So that is one thing.
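
[One reading of this decomposition, in rough notation with symbols that are not the Bureau's: the usual dual-systems estimator for a post-stratum is $\hat N = (E \cdot P)/M$, with $E$ the E-sample count, $P$ the P-sample count, and $M$ the matches. If the P-sample gives an estimated mover share $\hat\pi$, split $\hat E_{\text{mov}} = \hat\pi E$ and $\hat E_{\text{non}} = (1-\hat\pi)E$ and add two separate estimators,
$$\hat N = \frac{\hat E_{\text{mov}}\, P_{\text{mov}}}{M_{\text{mov}}} + \frac{\hat E_{\text{non}}\, P_{\text{non}}}{M_{\text{non}}},$$
which is then used to define the adjustment factor. The same form would apply to any covariate observed only in the A.C.E.]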

The second has to do with using some lessons that the Bureau has learned from the Living Situation Survey, the LSS. There is a paper by Betsy Martin in the summer Public Opinion Quarterly, where she talks about how the LSS was designed to probe more deeply than the traditional census interview or P-sample interview, and really include on the roster many of the people who are missed in the census—people with transient relationships, for example. There is a lot of probing.

If these people are missed by the census and tend to be missed by the A.C.E. as well, they are going to give rise to correlation bias. If you could modify the A.C.E. interview, you might pull in more of these and come up with an improved dual-systems estimator.

I am not suggesting that you modify the A.C.E. interview. I understand that the evaluation program is already set. What would be interesting would be, since you have some extra cases in the A.C.E. sample that you are subsampling anyway, to treat these as a separate evaluation sample. I will call it A.C.E.-Star. In this, you use the LSS methods for the A.C.E. interview and see whether you pull together additional people. If you could embed your traditional personal interview within the LSS, then you could see how many people you are adding, household by household. Even so, you are doing this on a probability sample of blocks, so you can still compare the results between the A.C.E.-Star and the usual A.C.E. interview.

There are some other details of this, but if you could do this, you would then have a means of seeing whether you could estimate correlation bias at the block-by-block level. That would then provide a means for testing models that Bill Bell has explored for bringing the national estimates of sex ratios down to the local level and would provide some direct evidence also for block-level estimates that you could use for evaluating the accuracy of the census and the DSE at the block level, by coming up with really good counts, at the block level, of whom you found. I am sure there are statistical problems with it, but it is a suggestion.

DR. NORWOOD: Thank you. Alan?

DR. ZASLAVSKY: I will just say a couple of things about estimation because I think I have said what I had to say about the other topics as we went through them. In terms of the unresolved cases, I guess the main message is, try to be as conditional as possible when you do the imputation for them or the estimation of the probabilities for them, which means, for the individuals within households, doing some modeling that will allow you to see whether they have different characteristics that predict different average omission rates.

I think, also, for the household non-interview weighting—you mentioned this and sort of passed over it in about two sentences in the memorandum on that topic—probably a lot more could be said about that in terms of how you define weighting classes. We know that there are household characteristics, structural characteristics of households, which are quite predictive of whether a household is correctly enumerated or erroneously enumerated. This is different from the issue this morning, which is imputing race or other characteristics. You have a group of cases in the same block, some of which you could resolve on the E-sample side and some of which you could measure whether they get enumerated or not—I am thinking of size of households and things like that—which are some of the things that you considered in your post-stratification. But since we know that a lot of these things that are predictive will not make it into the post-stratification, you could use those in forming the weighting classes for the non-interview adjustment and get better estimates. Again, if the non-interviews are related to some of the same characteristics that are related to census omission or enumeration, then you will get more accurate estimates that way.
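
[As an illustration of the weighting-class idea, in notation that is mine: if households are grouped into classes $c$ defined by such structural characteristics, the interviewed households in class $c$ have their weights inflated by the inverse of the class response rate,
$$w_i^{\text{adj}} = w_i \cdot \frac{\sum_{j \in c(i)} w_j}{\sum_{j \in c(i),\ \text{interviewed}} w_j},$$
so non-interviews are represented by interviewed households that share the characteristics predictive of enumeration status.]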

Those are the points I wanted to make about that topic. The other topic we did not get to was the extended search, which I think is a great idea. I have some detailed suggestions about it, which I do not think there is any point in going over here. But it is clear that not doing the extended search not only increases the low-level variance from the individual households, but also it puts you in the situation you were in in the last census, where you have these huge omissions that contribute a half a million people to the undercount, and you know that it is wrong and you just cannot do anything about it except on an ad hoc basis.

If anything, you might want to go a little further with the extended search idea and have maybe another stratum of really interesting cases, for which you would be willing to go really far to find them. I suspect, if a case is really, really interesting, there is a lot of information there. We are talking about cases where 500 people are outside the search area. There is probably enough information among 500 people to figure out where they really live. So if you can extend that idea a little bit further, to figure out what rules would have picked up some of the worst cases in 1990, you may save yourselves some real problems down the road.

DR. NORWOOD: Thank you. Joe?

MR. WAKSBERG: I am not going to repeat some of the comments I made earlier. Let me pick up a few additional things.

First of all, Alan left out, on the extended search, that you had included a factor of 1.56 on the variance for not doing the full extent of the search. It seems like a big price to pay.

In addition, if I understood this memo we got yesterday, there seems to be a bias. You say that simulating the effect of limiting the search areas to block clusters found that the direct DSE of the total population was 1.5 percent higher than the 1990 DSE. If I understand this, you are understating your estimate of undercoverage to the extent of 1.5 percent. That is probably half of the total of the undercoverage.

Did I misunderstand something?

DR. HOGAN: I will not say you misunderstood. I think this is one of the drawbacks of doing a lot of our research on 1990. Our research on 1990 has taught us a lot about how to design targeted extended searches and variance properties. But the 1990 data are very limited in terms of what we can infer about the bias properties of our models. I think some of the results that we discuss there—even though that is a very recent memo, we continue to work on that very issue.

There are a number of things about 1990 that do not carry over to 2000, one of which is that we move from the PES-B treatment of movers to the PES-C, and that has some implications about the relative bias of the extended search. In addition—this goes back to one of the 1990 evaluation studies, one of the so-called P studies—they looked on the E-sample side at how well they coded, whether it was correct because it was in the block or correct because it was in a surrounding block, and found out that, since that was not really very important to the 1990 PES design, it was not really done very accurately.

So I think some of the stuff in that memo, in terms of directly quantifying the bias of extended search or not-extended search, has to be taken with more than a grain of salt. I think what we learned there in terms of the variance properties, and what we learned there in terms of some of the theoretical issues that we had to think through in building a model, was very important. But there is no reason I can think of that the kinds of models we are dealing with would likely cause this kind of bias. I have not been able to think of a reason, except that the data set from 1990 is limited because of the way we treated movers, and, finally, the coding.

We need to continue to think about what we can infer from the 1990 data about probable 2000 biases.

MR. WAKSBERG: I want to echo a point that the other Joe [Sedransk] made before. If you want to talk about regions, think of defining regions differently. For example, it does not make sense to me to include California with Washington, Oregon, Alaska, as compared to including it with, say, Texas, with a high Hispanic population. Certainly a plausible region might be the southwestern states that have high Hispanic populations. You can think of other, similar situations.

DR. SPENCER: They could use 1990 estimates of coverage rates for defining a new kind of region.

MR. WAKSBERG: That is another way of doing it, yes.

The classifications that you have for minorities—black and Hispanic, under 10 percent, over 25 percent—just having two categories seems skimpy to me. Maybe for post-stratification, you are more constrained by the number of cells, but for sampling you certainly do not need to think in terms of very gross classifications. You can have much finer ones. You can select systematic samples at the same rate or at different rates.

Bruce earlier sent me a little note about demographic analysis. I echo some of the other comments made. You should explore more uses of demographic analysis than simply thinking in terms of sex ratios, black and white. I cannot be any more explicit because I do not know what I am talking about, but I just think it is something that should be considered.

For example, you are going to use sex ratios based on black females. What happens if the black female count in the census or the undercount estimate differs seriously from the demographic estimate? For blacks, the demographic estimate should be very good, and probably for the non-Hispanic white and other.

DR. NORWOOD: Thank you. Graham?

DR. KALTON: Until this meeting, I was an interested spectator of all of this. I knew it was complex. Now I really know it is complex. So I sort of feel like a learner in a lot of this.

But the kinds of suggestions that I have drawn out of the meeting are:

First of all, we spent quite a bit of time talking about the post-strata and the one sort of model, which is to take the cross-classification of all the variables and do a little bit of collapsing down when you have to, versus the logistic regression models where you can put in as many terms as you would like, including interactions. I feel that the cross-classification is too rigid. I basically favor the post-stratification approach of doing this, ending up with some cells that you operate on, but with some sort of more flexible approach to it. I had not thought of the idea that Rod brought up of a sort of propensity score/linear combination approach for putting some of these variables together, but that is certainly one way of going about it. The other way, which I have thought about, is the kind of approach that some of you may know as automatic interaction detection, where you split up cells and you do things differently in different subgroups; as you keep splitting, it goes in different ways.

Those kinds of ideas of a more flexible determination of the cells, I think, are worth thinking about.
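
[A minimal sketch of the automatic-interaction-detection style of cell formation mentioned above, using a shallow classification tree to find the splits; the covariate names, depth, and minimum leaf size are illustrative assumptions.]

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def tree_cells(df: pd.DataFrame, covariates: list[str],
               outcome: str = "enumerated",
               max_depth: int = 4, min_leaf: int = 500) -> pd.Series:
    """Assign each P-sample record to a data-driven cell (a tree leaf)."""
    tree = DecisionTreeClassifier(max_depth=max_depth, min_samples_leaf=min_leaf)
    tree.fit(df[covariates], df[outcome])
    return pd.Series(tree.apply(df[covariates]), index=df.index, name="cell")

# Hypothetical use:
# cells = tree_cells(df, ["minority_pct", "renter", "mail_return_rate"])
```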

It is not clear to me, if you go these kinds of routes, how important some of the later variables in this whole process are. I think the work certainly should be looking at what the consequences are of adding in extra variables, in terms of what they do to variances and what they do to try to improve the adjustments.

So that is one area.

A second area that we talked about this morning was this issue that is related to all of that, which is the comparability of the A.C.E. data and the census responses, and the inconsistencies there. One possible explanation of that, which Rod commented on a couple of minutes ago, is imputation. That is certainly, I think, something you could separate out and have a look and see what that is. If that seems to be a really important thing, you may be able to find ways to improve that imputation. I am not sure if you will, but I think it would push you to look at that question.

I raised a question this morning. I did not quite understand how it was answered, and I decided I did not want to delay everybody else on this one. But it still seems to me that the key issue is, do you have systematic error in this—let's say the household composition variables? I do not know. I am still not terribly worried about the random-error aspect of it, so that may not concern me. I think I want to concentrate on that particular systematic bias aspect of it.

With regard to the sample design, first of all, there is the point Joe Waksberg made earlier, which is that you are in a unique situation of having the very large sample, 750,000, from which you are subsampling. That is a natural image of a two-phase design. The question is, how do you use that most effectively? You can use it in design, you can use it in analysis, or you can use it in both. One of the things you can do is oversample some groups rather than others, or you can do ratio adjustments or whatever. Obviously, you are exploring some of those things, and that is an important thing.

The oversampling by race seems reasonable, but remembering that, as is noted in the papers, the race data are 10 years out of date. Therefore, you do not want to go as far as might be suggested by the optimum kinds of formulae.

There were a number of questions I had left that we did not have time to cover. I was not sure how you were going to do the oversampling when you found that the measures of size differed markedly, what sorts of operational procedures would be applied for that. There were a variety of other things that we did not have time for. I do not know what constraints you are under, but with the large blocks you segment them and you take a segment. The natural way to do it would be to just take a systematic sample through the whole of that. But there is presumably a reason why you do it this way. I would have asked the question and we would have discussed it, and you would have probably explained to me why what you do is right.

There are the boundaries to those segments that create the question of what you do about this extended search. Crossing those boundaries might be much more significant than crossing boundaries across “other.” Do you go in the whole of the block?

DR. HOGAN: Yes.

DR. KALTON: You do, okay. I was not clear on that. Then the relationship to the E-sample—I again was not clear on how that worked out. So I had a number of uncertainties about sample design, which we did not have time to go over.

DR. NORWOOD: Joe?

DR. SEDRANSK: These are all rather broad things. There are several uses of post-stratification. Reading through the documentation, it was never clear to me what importance the various pieces of these have. I do not think you need to clarify them for us now, since the workshop is ending at 5 o'clock, but it might help you to articulate as clearly as possible these alternative uses of post-stratification, some of which are obviously contradictory.

I am sure that is not a simple task. I am sure it is in the back of your mind, Howard, and is in the back of everybody’s mind. But if you can articulate it, you may get a better solution.

By the way, these are all comments about post-stratification.

Another thing that I am sure you are doing—but it has not gotten that far—the issue is, what are the additional gains from using, for example, another post-stratification variable or substituting one for another? In other words, looking at variances by themselves or mean squared errors does not seem to be the whole answer. Is it worthwhile adding another variable? What sort of reduction do you get?

That is the second thing. The third thing, which is much more substantive, is testing models. At the stage you are at now, you have some candidate models you are testing. Suggestions were made here about checking them against domains that were not used in the post-stratification. One suggestion was geography—states, large cities. Then Graham had a very good idea: how about something like growth areas, something that is not connected with the usual variables? Then I thought, as I was getting up for the break, what about surprises—that is, things you could not think of in the first place? My suggestion there would be some kind of cross-validation—just drop out some observations and see how well you predict. I do not know if this is particularly useful. I am just thinking: is there a factor, or a type of factor, that you are not capturing and do not know about? Graham thought of growth. I had not thought of growth, but there may be others like it that we do not know about, and the only way I know to find them is to drop out some observations.
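The cross-validation idea is straightforward to sketch: fit the coverage model on part of the data, hold out the rest, and see how well the held-out outcomes are predicted. The example below is a hypothetical illustration with simulated data and an ordinary logistic regression; none of the predictors or effects come from the actual A.C.E. models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold
from sklearn.metrics import log_loss

rng = np.random.default_rng(1)

# Simulated person-level data: predictors and a 0/1 "missed" outcome.
n = 20_000
X = np.column_stack([
    rng.integers(0, 2, n),        # tenure (invented)
    rng.normal(0.75, 0.1, n),     # tract mail-back rate (invented)
    rng.integers(0, 4, n),        # race/ethnicity group code (invented)
])
logit = -2.5 + 0.8 * X[:, 0] - 2.0 * X[:, 1] + 0.3 * X[:, 2]   # made-up model
missed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Five-fold cross-validation: how well does the model predict held-out folds?
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], missed[train_idx])
    pred = model.predict_proba(X[test_idx])[:, 1]
    scores.append(log_loss(missed[test_idx], pred))

print("held-out log loss by fold:", np.round(scores, 4))
```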

Another thing that I would never have thought of before, except from predicting mortality rates for chronic obstructive pulmonary disease: there we were using a kind of random-effects model, and the random effects for areas were all very small. I thought, does this make any difference? It is a kind of small-area analysis. It was very revealing to drop them out of the model and see what happened. It turned out that the model did terribly without those little effects in it.

The reason I am saying that in this context is—this is, again, a matter of time—that if tenure, one of the variables you are thinking of including, is an issue, I might be more convinced about tenure if you dropped it out of the model and saw how well the model performed. If tenure really matters, you ought to see a real decline in performance when you knock it out.

So that is the idea—even something that is sort of obvious, to see how it goes.
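That drop-one-variable check can be mocked up the same way: fit the model with and without the variable and compare held-out predictive performance. The sketch below is hypothetical; "tenure" is just an illustrative column in simulated data, not the Bureau's variable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Simulated data in which tenure genuinely matters (a made-up effect).
n = 20_000
tenure   = rng.integers(0, 2, n)
mailback = rng.normal(0.75, 0.1, n)
logit = -2.0 + 1.2 * tenure - 2.0 * mailback
missed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_full    = np.column_stack([tenure, mailback])
X_dropped = mailback.reshape(-1, 1)          # same model without tenure

for name, X in [("with tenure", X_full), ("without tenure", X_dropped)]:
    # Mean held-out log loss over five folds; a clear worsening when tenure
    # is dropped is the "real decline in performance" being looked for.
    score = cross_val_score(LogisticRegression(max_iter=1000), X, missed,
                            cv=5, scoring="neg_log_loss").mean()
    print(f"{name:>15}: mean held-out log loss = {-score:.4f}")
```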

Two more general things. In some of the modeling exercises, it seemed to me that rather strong assumptions were made—the independence assumption, for example. I do not know whether you can relax it, but I am suggesting that, if a key analysis depends on such an assumption, you still try to check the sensitivity to it.
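As a generic illustration of that kind of sensitivity check—separate from the A.C.E.'s own models—the sketch below takes a simple two-list (dual-system) table, varies an assumed odds ratio between being included in the two lists, and shows how far the implied total moves from the independence-based estimate. All counts are invented.

```python
# Hypothetical 2x2 capture counts: in both lists, first list only, second only.
n11, n10, n01 = 900, 80, 60          # invented numbers

for odds_ratio in [1.0, 1.1, 1.25, 1.5]:
    # Under an assumed odds ratio between the two capture events, the
    # missed-by-both cell is n00 = odds_ratio * n10 * n01 / n11;
    # odds_ratio = 1 is the usual independence assumption.
    n00 = odds_ratio * n10 * n01 / n11
    total = n11 + n10 + n01 + n00
    print(f"odds ratio {odds_ratio:4.2f}: implied total = {total:8.1f}")
```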

The very last thing—since I might be the only person here from the 2010 panel—is that you are mostly (although not completely) using 1990 data. It would be really good to analyze, after the census, whether you would have drawn very different conclusions if you had had the 2000 PES rather than the 1990 one. If the answer is yes, then when you come up to 2010—which is projected to be very different from 2000—maybe you should not be spending this much time on the 2000 census.

DR. NORWOOD: Thank you. Bruce?

MR. PETRIE: I do not think I could add anything of value to the technical discussion on the various aspects of A.C.E. But based on my reading of the documents and the discussion that we have had here today, I did form an impression or two of the program. It really boils down to the issue of complexity. Notwithstanding Howard’s and his colleagues’ attempts and assurances earlier that efforts are made, where possible, to keep things simple and robust, the fact of the matter is that this is a complex initiative. That has implications from a couple of points of view.

One is in terms of explaining to the public just what is going on here, how the second set of real census results was produced. It is not going to be easy to understand. There is room for legitimate differences of opinion among experts about the choices that are being made, the decisions that have been made, that will be made, and a debate about whether some of the decisions were the best ones, or indeed even proper ones. So it is going to be a difficult program to explain when the census results are released, particularly if the results are considerably at odds, in some areas, with the census counts. So that is one aspect of the complexity that would be of concern.

The second is that there still are, as I read it and as I listen, a fair number of decisions that have to be made and a fair bit of analysis and work that has to be done before the program is in the field and before the analysis can take place. Those various steps that are yet to be taken simply will never be tested in an integrated way. It was not possible in the dress rehearsal. There is not going to be an occasion to do this. The bottom line is that there will have to be a combination of good luck and good management to ensure that the outstanding issues and decisions that have to be taken and the work that has to be done are, in fact, completed properly and can be implemented. The schedule just does not leave much room for any significant second-guessing or rethinking of the plan.

So there is an operational challenge here that I think is quite substantial. I know that the folks at the Bureau appreciate that and are keeping it in mind in the decision process. It is one that I certainly would emphasize, as somebody who used to be concerned about running censuses. It is a major concern that I would have with this set of proposals that is on the table.

That is generally it.

DR. NORWOOD: Norman?

DR. BRADBURN: I do not have a lot to add, except that I would like to stress a perspective for thinking about these post-stratification issues, one that I think has been implicit but that I would like to make more explicit. It looks to me as if everything we have been talking about concerns the kinds of variables that are associated with errors in the census, and we concentrate on trying to pick post-strata that reflect that. The perspective I would urge is to think about what we know about the processes that actually produce errors in the census. Many of the variables we use are proxies for those processes, and they may be good or bad proxies.

There are two kinds of things we talk about: unit errors, where whole households are missed, and then cases where individuals are missed. What is associated with missing units? The big one has always been the address list. You have done a lot to improve that. But it seems to me—and we have talked a little bit about this—that places where there are big mismatches or errors between the initial address lists and the updates might be areas that you want to concentrate on. The mail-back rate is very appealing to me, because it seems to capture a lot of what the problems are—even though I was not quite sure from these kinds of models whether it really added much. But it seems to capture so much of what we know about the difficulties.

There are some others that were mentioned—areas where there are a lot of multiunit structures. We know that those are problems. But that may be captured in the mail-back rate.

On the individual coverage problems, one thing that we had not talked about, which struck me as possibly useful, is the number of forms that are returned with a lot of missing data. Again, that would indicate areas where there are a lot of problems.

Ken mentioned thinking through—and this, I think, we have not given as much thought to as we should—what the changes are this time compared to 1990, since we have been relying so heavily on the analysis of 1990. I think this probably reflects all of our thinking: do not count your improvements before they are proved.

On the other hand, we should look at the other side of it, because there are some changes that I think are going to make the gross error rate worse, at one level at least. We have not talked about that. As you know, I am probably the only person—certainly on the panel, but probably the only person in the world—who worries about overcounting rather than undercounting. If it does not happen this time, I think by 2010 we will be thinking about net overcounts rather than net undercounts.

But the big thing that is happening this time that I think worries a lot of us on the operational side is—not to put it too pejoratively—the loss of control over the forms. There are going to be a lot of forms. I think we heard a number yesterday, 425 million or something being printed for 125 million households. That suggests that there are three or four forms per household that are going to be....

MR. THOMPSON: Let me comment on that. The biggest number of forms are forms that we print for our nonresponse follow-up enumerators. That is based on past experience. We only take one form per household. But enumerators tend to quit, and when they quit they usually walk off with a bunch of forms. Instead of trying to track them down and get the forms back, we send somebody else out with a new batch of forms.

DR. BRADBURN: Okay, but there are going to be a lot of forms left around, presumably, for the “Be Counted” program. One of the big things is that you want to have a lot of ways for people to report their census information other than the one form that is mailed to the house. We have talked in other meetings about the fact that those forms do not have the printed labels, so there are ways of distinguishing them and so forth. However, the gross effect is that there are going to be a lot more matching problems, and there is going to be a lot more chance of two or more forms for the same household. As I always say, coming from Chicago, we know about multiple voting and other such things, so we always worry about these things.

So I think that is something that one needs to look at. If, by the time you are doing the post-stratification, you know something like how many duplicate or non-standard forms came in from an area, that might be something you want to consider as a post-stratification variable.

Rod mentioned the matching problem, which I think is a very important one. Again, if there is some way you can bring a measure of the probability of accurate matching into the post-stratification, or at least into the estimates, that is something I think you would want to give serious attention to. A lot of these suggestions depend on having information from the actual operation of the census in time to be useful for the A.C.E.

I think we have been concentrating a little bit too much on things we thought would reduce undercoverage. I think we ought to think about things that might increase gross errors and take that into consideration.

DR. NORWOOD: Bill?

DR. EDDY: I have the great advantage, or disadvantage, of being very near the end here. Everybody has already said all of the things that I wanted to say. I just have one small thing to add, which is to echo this notion of using methods other than logistic regression. We can name these methods. I think in this situation they are clearly going to be superior; I do not think there is any doubt that they will be superior to the regression method. They have the great advantage that you do not have to decide what size city makes a place urban or what the right number of categories is for your urbanicity measure. When you are done, you have let the data make those definitions for you. If it turns out that 149,000 is the right-sized city, then 149,000 is the right-sized city. So I just want to really pound on that.
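The transcript does not spell out which alternative methods are meant; classification trees are one family that behaves the way described here, letting the data choose a cutpoint such as a city-size threshold rather than fixing urban categories in advance. The sketch below is a hypothetical illustration on simulated data, not the method discussed at the workshop.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)

# Simulated data: the probability of being missed jumps for places below a
# "true" city-size threshold of 149,000 (an invented value echoing the talk).
n = 30_000
city_size = rng.uniform(1_000, 1_000_000, n)
mailback = rng.normal(0.75, 0.1, n)
p_missed = np.where(city_size < 149_000, 0.10, 0.04) + 0.1 * (0.75 - mailback)
missed = (rng.random(n) < np.clip(p_missed, 0, 1)).astype(int)

X = np.column_stack([city_size, mailback])
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=500).fit(X, missed)

# The fitted tree reports the split it chose on city size; with enough data
# it should land near the 149,000 used to simulate the outcome.
print(export_text(tree, feature_names=["city_size", "mailback"]))
```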

DR. NORWOOD: Ken, would you like to make any comments about the day?

DR. PREWITT: I do not, other than, obviously, to express appreciation. I would say one general thing, partly building on what Norman just said. We actually do believe we have a more robust operational census for the basic enumeration than we had in 1990, by some measurable amount. I cannot measure it yet, but we are really confident that the promotional effort is catching on, the paid advertising is high quality, and the Census in the Schools Program is certainly catching on. Complete-count committees are now out there, in the neighborhood of 7,000 or 8,000 and growing every day, et cetera. You have begun to see a little bit in the press already, but you will see a lot more of it. There is a lot of individual initiative being taken by lots and lots of groups.

That creates a particular kind of problem for the Census Bureau. We really are trying to share the operation, if you will, or the ownership of the census, with “the public.” That creates all kinds of problems of quality control and of balancing pressures on our regional directors and our local offices. We have people other than ourselves making demands on us. That also feeds into Norman’s concern about certain kinds of overcounting—pockets of overcount where you get a whole lot of mobilization in a community.

Nevertheless, setting aside that dimension of it, I do think that we have a strong operational system. We are extremely pleased that LUCA came and went on schedule. That was a big test for us. In fact, I would go back to Bruce’s point, a quite important point: this is probably the most complicated census we have ever fielded, and it has never been tested as a whole. That is a result, of course, of the way the Supreme Court ruling happened. None of the field tests, none of the dress-rehearsal sites, were run the way we are now about to run the census—with a 12-month frame instead of a 9-month frame, trying to get the adjusted numbers for the redistricting data, and so forth.

So there is, I think, a kind of complicated operational anxiety that is simply associated with the fact that we have not run the whole system through any kind of field test. That sits there.

On the one hand, it is more robust; on the other hand, it is not tested. On the one hand, you have more public engagement; on the other hand, that creates other kinds of complicated operational challenges for us. How all of that is going to balance out is extremely difficult to know. There are now serious people—and I would invite anyone in this room to join in—laying bets on what the response rate is going to be. There are serious people who are now willing to bet that we are going to do better than our 61 percent target, and other serious people who say the demography is running against it.

Anyway, all of that is to say that we do feel reasonably confident about the basic enumeration census, barring some unforeseen event on the budget front, the political front, or the public-relations front, or a natural disaster. However, at the end of the day, we cannot completely count our way out of the undercount problem. If we thought we could, we would simply go try to count our way out of it. We actually do not believe we can count our way out of the differential undercount problem. Therefore, we are extremely pleased that we got an A.C.E.

I think, at the end of the day, if all goes reasonably according to the current plan and design, we may—for the first time since the discovery of, and the beginning of early work on, differential undercount issues—be able to tell the country, based on data, how far you can get trying to count your way out of the undercount problem, and therefore how much you need an A.C.E., however the data are used, at least to know how well you did. So getting the A.C.E. right, technically and operationally, is extremely critical, just in terms of giving the country an answer to what has been the albatross around the decennial census for a half-century, in some respects. That is why this meeting, and the other one on dual-systems estimation, are so important. By a funny confluence of political circumstances, we are in a position in which we have a good budget, a good operational plan for the enumeration, and the capacity to do an A.C.E. largely according to our statistical design—300,000 cases out of the 750,000, not trying to make state estimates, a 12-month frame instead of a 9-month frame. All kinds of properties of the A.C.E. are closer to what the Bureau would have wanted, if you had asked us 8 or 9 years ago what the ideal way to do an A.C.E. is. We are closer to that ideal than you might imagine.

So at least we have the capacity to say something fairly serious to the country when this is all over about what kind of basic decennial census you ought to be running.

That is why your help in making sure the A.C.E. is as close as possible to a strong and defensible design is so important—to say nothing of the importance of the National Academy in general, and this committee in particular, in helping to make the case that this is a very transparent process. We are not hiding anything. Here are all of our problems; here is where our current thinking is; here are all the papers. We constantly want to keep using whatever mechanisms we have to create, we hope, a level of political confidence that this is a transparent, open set of decisions, and we will pre-specify as much as we can, so that nobody will think we are down in the basement fiddling with this, that, or the other thing next spring.

Anyway, that is why we think this meeting is so very important.

DR. NORWOOD: Thank you. Andy, do you have anything to tell us?

DR. WHITE: I would like to draw a simple analogy. It is really easy to explain to people outside of this room how a go-cart works. You can show them a frame, an engine, and a chain and say, this is how it works. We all know it is very hard to explain to people outside this room, and inside this room probably, how a modern automobile system works. Just look under that hood, guys; it is tough. That does not mean that the modern automobile does not work.

I also feel that those of us who have varying degrees of understanding of what has gone on today in this room might want to compare it to sitting in on a design session of technical engineers for a new automotive engine. I think a lot would have been said that was very complicated that not everyone could take out of the room and explain to somebody else. But you wait for the results: does the engine work?

What counts, I think, is not the complexity per se; it is how well controlled the complexity is, how well thought-out it is, how well it is executed, and what the result is. I kind of hope that the complexity we witnessed today ends up giving us a Cadillac and not a go-cart. It is hard to explain some of this stuff.

DR. NORWOOD: We have a couple of minutes. If anyone in the audience has anything to say, we will entertain an opportunity for you to do that now.

[No response]

Let me say that I think this has been a good day. It has not surprised me that it has been complex. What I have been extremely pleased with is the cooperation we have had from the Census Bureau. Even more than that—because the Census Bureau has always been cooperative—what has been unusual is the production of papers, even internal papers. I again want to compliment the staff. When you think about all the criticisms that a lot of people in the press and elsewhere make of people who work for government agencies, it is quite clear that there are very few issues that the people inside the Census Bureau have not thought of, and very little on which they have not done very high-quality work. That does not mean they have all the answers. I do not think any statistical agency ever does or ever will have all the answers. But I do want to commend you all for the efforts that you are making.

Having said that, Ken, I really do not envy you for having to explain all of this in very simple terms. But I think it can be done, and if anybody can do it, I think you can.

I want to thank you all for coming and adjourn the meeting.
