1. To assist in this, the NRC should redesign the final report and the Research Advisor’s evaluation form to maximize the collection of data from these instruments (see Box 3-1 and Box 3-2 for suggested questions).

  2. The final report and the Research Advisor’s evaluation should be made mandatory.

  3. Some elements of the data currently collected could be subjected to further analysis.

    1. NIST may wish to conduct further analysis of RAs’ publications in peer-reviewed journals, for example by:

      1. asking whether the RA was sole or lead author,

      2. examining whether RAs publish with NIST staff, and

      3. examining the quality of the journals in which RAs publish, although this requires some ranking of journals.

    2. NIST may wish to conduct an impact analysis of RAs’ productivity, for example by:

      1. conducting a citation analysis to see how often RAs’ publications are cited by others (this can be accomplished using citation indexes; an illustrative sketch is given at the end of this section), or

      2. assessing the type or size of grants postdocs receive.

    3. NIST may wish to conduct a more thorough review of its support of RAs, asking how familiar RAs are with NIST administrative offices, how often they turn to those offices for help, and for what reasons.

  4. NIST could also conduct a social network analysis of the collaborations of RAs (or of NIST employees) to see how the Research Associateship Program facilitates new or wider collaboration among scientists and engineers; an illustrative sketch of such an analysis is given at the end of this section.

  5. When data allow, NIST could consider disaggregating productivity and satisfaction measures for RAs by lab, gender, and race/ethnicity; a sketch of this disaggregation also appears at the end of this section.

  6. NIST should conduct a broad evaluation of the careers of former RAs to assess the impact of the Program on RAs’ careers, on NIST, and on the broader science and engineering community. The best approach is a survey comparing the career outcomes of NIST/NRC RAs with those of similar postdocs, directed to former RAs and a suitable comparison group. Ideally, two comparisons could be made. First, one could construct a peer group: a matched or stratified sample of individuals who held postdoctoral appointments similar to the NIST appointment. Although not ideal, one solution would be to take a stratified sample of former RAs from the Fellowships Office’s Directory (a sketch of such a stratified draw appears at the end of this section). The Directory is a census of former RAs, but, as noted earlier in the report, many of these individuals could not be located or failed to respond to an earlier survey designed to collect information on their current employment. The second comparison group would consist of similar doctorate recipients. A roster could be assembled by tapping the pool of applicants to RAPs who did not receive an award. These individuals will likely exhibit a diversity of career paths, including some who took postdocs (in academia or industry) and others who went straight into employment.47

47. An alternative approach is to construct a comparison group from the NSF’s Survey of Doctorate Recipients by identifying a group of former postdocs. For an example of a report that uses this approach, see Oak Ridge Institute for Science and Education, 2003.
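The sketches below are illustrative only; all file names, column names, and numbers in them are hypothetical placeholders rather than actual NIST or NRC data products. The first is a minimal version of the citation analysis in item 3, assuming a hypothetical CSV export from a citation index (such as Web of Science or Scopus) with one row per RA publication.

```python
# Illustrative citation-count tabulation from a hypothetical citation-index
# export; the file name and column names are placeholders.
import pandas as pd

pubs = pd.read_csv("ra_publications.csv")  # hypothetical export file

# Publication counts and citation totals per Research Associate.
per_ra = (
    pubs.groupby("ra_id")["times_cited"]
        .agg(publications="count", total_citations="sum", mean_citations="mean")
        .sort_values("total_citations", ascending=False)
)

# Share of publications never cited, by publication year.
uncited_share = (
    pubs.assign(uncited=pubs["times_cited"].eq(0))
        .groupby("pub_year")["uncited"]
        .mean()
)

print(per_ra.head())
print(uncited_share)
```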
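A minimal sketch of the social network analysis in item 4, assuming a hypothetical list of publications with their author names; whether an author is an RA or a NIST staff member would come from program records, and the names shown here are invented.

```python
# Illustrative co-authorship network built with networkx; publication
# records and author names are invented placeholders.
from itertools import combinations

import networkx as nx

publications = [
    {"title": "Paper A", "authors": ["RA Smith", "Staff Jones"]},
    {"title": "Paper B", "authors": ["RA Smith", "RA Lee", "Staff Chen"]},
]

G = nx.Graph()
for pub in publications:
    # Each pair of co-authors on a paper gets an edge; repeat collaborations
    # increase the edge weight.
    for a, b in combinations(pub["authors"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Simple indicators of how widely individuals collaborate.
centrality = nx.degree_centrality(G)
components = list(nx.connected_components(G))
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True))
print(f"{len(components)} connected component(s)")
```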
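A minimal sketch of the disaggregation in item 5, assuming a hypothetical RA-level table; the grouping columns, measure columns, and the small-cell suppression threshold are placeholders that NIST would set to match its own reporting rules.

```python
# Illustrative disaggregation of productivity and satisfaction measures.
import pandas as pd

ras = pd.read_csv("ra_records.csv")  # hypothetical RA-level file

summary = (
    ras.groupby(["lab", "gender", "race_ethnicity"])
       .agg(n=("publications", "size"),
            mean_publications=("publications", "mean"),
            mean_satisfaction=("satisfaction", "mean"))
       .reset_index()
)

# Suppress cells with very few respondents before reporting.
small = summary["n"] < 5
summary.loc[small, ["mean_publications", "mean_satisfaction"]] = float("nan")
print(summary)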
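Finally, a minimal sketch of the stratified draw from the Fellowships Office’s Directory described in item 6, assuming a hypothetical Directory file; the strata (lab and tenure year) and the per-stratum sample size are placeholders to be chosen to fit the survey design.

```python
# Illustrative stratified sample of former RAs from a hypothetical
# Directory file; strata and sample sizes are placeholders.
import pandas as pd

directory = pd.read_csv("ra_directory.csv")  # hypothetical census of former RAs

# Draw up to 10 former RAs per lab-by-cohort stratum (fewer when a stratum
# is small), with a fixed seed for reproducibility.
sample = (
    directory.groupby(["lab", "tenure_year"], group_keys=False)
             .apply(lambda g: g.sample(n=min(10, len(g)), random_state=0))
)
print(sample[["ra_id", "lab", "tenure_year"]].head())
```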


