4
Evaluation of the Design Options

EVALUATION CRITERIA

Like most large data systems, SESTAT serves multiple purposes, and no single design can be ideal in all respects. Consequently, the search for the “best” design must balance tradeoffs among multiple objectives. In evaluating the three design options for the 2000 decade of SESTAT, the committee considered several criteria, drawn from its collective experience with the design and evaluation of surveys. As important as deciding which criteria to consider is determining which should receive priority. Our three primary evaluation criteria relate to the quality of the data going forward:

1. How well will the design cover the complete population of interest?
2. How well will the design achieve and maintain an adequate response rate?
3. Will sample sizes permit sampling precision that is adequate for the principal uses of the survey data?

Given concerns about potential biases in the current sample, criteria 1 and 2 take the highest priority. Our other evaluation criteria touch on the usefulness of the future (and past) data:
4. Will the design support longitudinal data analysis (e.g., analysis of career changes by individuals) both before and after 2000 and into the future?
5. Will pre- and post-2000 aggregate results be comparable enough to support analysis of trends over time?
6. Will the design provide information that can be used to better understand (and, perhaps, adjust for) biases in the data for the 1990s?

We consider these criteria of secondary importance because each one matters only to the extent that the quality of future data is high. Although cost is not one of our explicit evaluation criteria, we recognize that cost will constrain parameters of the final design choice.

COMPARISON OF THE 2000 AND 1990 CENSUS FRAME OPTIONS

Before discussing the merits of the hybrid option, we compare the two basic options. Each option, the 2000 census frame and the 1990 census frame, possesses certain advantages relative to the other. By replacing the 1990 sample with one chosen from the 2000 census, the 2000 census frame option immediately removes the systematic coverage gaps that have accumulated in the current sample between 1990 and 2000. Thus, the 2000 census frame option will eliminate whatever bias has developed due to cumulative nonresponse. Indeed, the nonresponse problem for the 1990 census frame option may increase dramatically in 2003 because more than 4 years will have passed since the last survey, making recontact more difficult than in previous surveys. Although the plan for the 1990 census frame option includes strategies to fill in coverage gaps and to reduce nonresponse bias, we expect those strategies to have limited success. Screening with the 2000 census for recent immigrants who meet the SESTAT criteria (groups (3) and (4) detailed above) may be effective.
However, we expect that screening for new college graduates in non-S&E fields who work in S&E occupations (group (2)) and for college graduates in non-S&E fields who converted to S&E occupations after April 1993 (group (5)) would be very expensive.[5] Finally, the proposed attempts to obtain responses from nonrespondents to the 1993 NSCG are unlikely to produce much yield, given the difficulty of tracking people over a 10-year period and of converting persons who may have previously refused to participate.

[5] It would be an expensive, inefficient use of resources to sample members of group (2) with a probability comparable to that used in the 1993 NSCG. Naively, that would require an effort comparable to the 1993 NSCG itself. One could possibly reduce costs by 50-75 percent by focusing on college graduates born after 1965 (most of those born before then would have obtained a degree before 1990). Consequently, for the 1990 census frame option, SRS would need to sample at a much lower rate in practice, implying a much smaller sample and therefore much larger weights. It would probably be even more difficult to obtain an adequate sample size for group (5), because SRS could not screen on age.

In contrast to these disadvantages, the 1990 census frame option offers certain relative advantages. First, it would increase the scope of potential longitudinal analyses. Unlike the 2000 census frame option, the 1990 census frame option would support longitudinal analyses that span the 1990 and 2000 decades, and it would support analyses covering longer periods within the careers of individual scientists and engineers. Although the 2000 census frame option eliminates any potential for short-term longitudinal analyses now, if it is conducted so as to obtain a high response rate it should improve the quality of longitudinal analyses performed on future data. The committee is unaware of much use of the longitudinal data available in the 1990s, so the temporary loss of that capability may not matter much. Of course, interest in longitudinal analyses based on SESTAT data may increase in the future.

Second, because the 2000 census frame option draws a completely new sample with substantially different methodology than the 1999 sample, there could be an artificial blip in estimates going from 1999 to 2003, due more to methodology than to real trends. Although the 1990 census frame option promises to produce more stable estimates from 1999 to 2003, that stability could arise mainly because the 2003 data would share most of the biases of the 1999 data.

Third, the 2000 census frame option is expensive. SRS staff estimate that it would involve a screening rate (the ratio of initial sample size to the number of SESTAT-eligible respondents) of almost 3 to 1, the largest screening rate of the three options (Westat, 2002a). Of course, as noted above, the 1990 census frame option involves at least some screening of people sampled from the 2000 census, and that rate might also be quite high. Although cost is not one of the primary evaluation criteria for this committee, resource constraints can affect sample sizes and, therefore, the sampling variability of the estimates produced from a design. However, even if the 1990 census frame option would make more efficient use of resources, that efficiency does not necessarily translate into a reduction in sampling variability, because using any cost savings to increase the size of the main components of the sample (the 1993 NSCG and subsequent NSRCGs) is not an option at this time. Furthermore, it is quite possible that the savings from avoiding large-scale screening in the 1990 census frame option would be offset by the increased costs of maintaining adequate response rates. In addition, recontacting members of the 1990s panel raises a concern about the informed consent procedure in 1999: it appears that at least some participants were told that they would not be contacted again.

After weighing the pros and cons of these two options, the committee concludes that the 2000 census frame option is a much stronger design than the 1990 census frame option. The most important evaluation criteria are those involving data quality. Because the 2000 census frame option provides a direct and effective solution to the problems of the current sample by selecting fresh participants, it should produce much more accurate data in 2003 and into the future. Given this judgment about data quality, the apparent advantages of the 1990 census frame option seem questionable. Based on all these considerations, we believe that it is better to “cut the cord” and collect the best possible data for 2003 and later years.

COMPARISON OF THE 2000 CENSUS FRAME OPTION AND THE HYBRID OPTION

Because the hybrid option is a mixture of the 2000 census frame option and the 1990 census frame option, almost all of the above discussion is relevant for a comparison of it with the 2000 census frame option.
Relative to the 2000 census frame option, the hybrid option shares most of the advantages and disadvantages of the 1990 census frame option, though to a lesser extent, depending on the mix of the 1990 and 2000 census frame options in the hybrid. If that were the whole story, then the 2000 census frame option would again be the clear choice. However, the hybrid approach should not be dismissed so easily. The availability of data from both census frames offers potential advantages that would be unique to this design. In the end, however, we do not believe that those potential advantages are great enough to overcome the liabilities of continuing the current sample based on the 1990 census frame.
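The cost comparisons above turn on the screening rate, that is, how many people must be contacted per SESTAT-eligible respondent obtained. A minimal sketch of that arithmetic follows; the eligibility rate and target count are invented for illustration and are not SRS figures, though an assumed eligibility rate of about 35 percent does reproduce a screening rate of almost 3 to 1.

```python
import math

# Illustrative two-phase screening arithmetic. The eligibility rate and
# target count below are assumed for illustration; they are not SRS estimates.

def required_screening(target_eligible: int, eligibility_rate: float):
    """Return how many people must be screened to yield a target number
    of survey-eligible respondents, and the implied screening rate."""
    screened = math.ceil(target_eligible / eligibility_rate)
    return screened, screened / target_eligible

# An assumed eligibility rate of 35 percent implies screening nearly
# three people for every eligible case found.
screened, rate = required_screening(100_000, 0.35)
print(screened, round(rate, 2))  # prints: 285715 2.86
```

The same reciprocal relationship drives the weighting concern noted in footnote 5: sampling a group at a much lower rate means each respondent stands in for many more people, that is, carries a much larger weight.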
SRS has suggested that one potential advantage of the hybrid option is that it may permit an assessment of the bias in the current SESTAT sample (Westat, 2002a: 3-8):

    If the two samples (current panel and 2000 postcensal sample) produce comparable results, then they can be combined with relatively little loss in efficiency. On the other hand, if the comparisons indicate that estimates from the existing panel are much different than those based on the new sample, then it may be presumed that the older sample is biased. In this case, the new sample can still be used to make unbiased cross-sectional estimates (but with reduced levels of precision). Analysis of the differences might also provide improved nonresponse adjustment methodology that would bring the old panels back into effective use.

Assuming that the two samples could be combined successfully, the hybrid approach would allow SESTAT to capitalize on the cost advantages of the 1990 census frame option. The greater cost efficiency of the current panel would allow for a greater overall sample size than in the 2000 census frame option, provided the reductions in screening costs are not counterbalanced by the additional costs of retention. A larger sample size could translate into greater precision. The hybrid option would produce even greater gains for subpopulations of interest (e.g., underrepresented minority groups or small S&E fields) because those groups could be oversampled from the current panel.

The committee believes that it is a mistake to try to combine results from two designs when one is known to have flaws of uncertain consequence. If the two samples produce substantially different results, it would be unwise to combine them without adjustment. Yet any adjustment would be a tricky and likely ineffectual undertaking, because there would be no way to accurately apportion the observed difference among sampling error and the biases in the two panels.
To the extent that the adjustment consists of calibrating the 1990-panel results to the 2000-panel results, the effective sample size would essentially be the size of the 2000 panel. In other words, the data for the 1990 panel would be almost completely wasted. Even if the two samples produce very similar results for most outcomes, that finding would not provide complete assurance that there is no bias in the data from the 1990 panel. Consequently, it would still be
improper to place much more confidence in the combined results than one would place in results based on data from the 2000 panel alone. In short, combining a reduced panel of acceptable quality with another panel of dubious quality, on the slim chance of better understanding a correctable bias, is not a wise decision.

An additional potential benefit of the hybrid option is that it would allow SRS to analyze aspects of specific survey items. For example, in 2003, SRS will ask about race using a new format that allows respondents to report multiple races. Because data for the current sample were collected using a single-race format, comparison of old responses with those on the 2003 NSCG would provide information about the effect of the change in format. Similarly, by asking 2003 NSCG respondents about the dates and types of all their degrees and comparing that information with earlier data, SRS can learn something about the reliability of this information. Although we agree that there are important things to learn about a variety of survey items, we do not believe that the hybrid option is the most appropriate source of such data. The Census Bureau is conducting research that will more effectively measure the effects of changes to the race question, and other methodological questions might be addressed more efficiently by focused reinterview studies or other methods.
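The committee's earlier point about combining the two panels can be made concrete with a small mean-squared-error sketch. All figures here are invented for illustration; the panel sizes, unit variance, and bias value have no connection to actual SESTAT data.

```python
# Illustrative mean-squared-error comparison: pooling an unbiased sample
# with a larger but possibly biased panel. All numbers are invented for
# illustration; they are not SESTAT figures.

def mse_pooled(n_new: int, n_old: int, bias_old: float, sigma2: float = 1.0) -> float:
    """MSE of a precision-weighted mean of two panels when the weights
    ignore a bias of size `bias_old` in the old panel."""
    w_old = n_old / (n_old + n_new)  # weight the old panel gets if bias is ignored
    w_new = 1.0 - w_old
    variance = w_old**2 * sigma2 / n_old + w_new**2 * sigma2 / n_new
    return variance + (w_old * bias_old)**2

mse_new_alone = 1.0 / 50_000  # an unbiased panel of 50,000 by itself

# With even a small undetected bias, pooling in the larger old panel
# yields a worse (larger) MSE than using the new panel alone.
assert mse_pooled(50_000, 80_000, bias_old=0.01) > mse_new_alone

# With no bias, pooling would of course help.
assert mse_pooled(50_000, 80_000, bias_old=0.0) < mse_new_alone
```

Because the size of the old panel's bias cannot be separated from sampling error, an analyst cannot tell which of these two regimes applies, which is precisely the committee's objection to relying on the combined estimate.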