The National Science Foundation Authorization Act of 2002 mandated that the director of NSF, in consultation with the director of the Office of Management and Budget (OMB) and the heads of other federal agencies, enter into an agreement with the National Academies to conduct a comprehensive study to determine the source of discrepancies in federal reports on obligations and actual expenditures of federal research and development funding (U.S. Congress, 2003). The legislation directed that the study examine the relevance and accuracy of reporting classifications and definitions; examine whether the classifications and definitions are used consistently across federal agencies for data gathering; and examine whether and how federal agencies use NSF funding reports, as well as any other sources of similar data used by those agencies.
Because this committee's study had recently been initiated when the legislation was passed, NSF requested, and the panel accepted, the task of studying the discrepancy.
The panel prepared an interim report, which serves as the basis of this final report. Indeed, highlights of the interim report's analysis and recommendations have been carried forward into this final report.
The interim report assessed the commitment of the Science Resources Statistics (SRS) Division of NSF to quality of performance and professional standards and examined aspects of the statistical methodology and accuracy in the SRS portfolio of surveys. Both the interim report and this final report focus on the concept of quality for the NSF R&D expenditure statistics. Although there is no commonly accepted definition of quality for surveys, despite more than two decades of intense interest in aspects of quality in federal surveys, there is an evolving consensus that the quality of federal statistical data encompasses four components: accuracy, relevance, timeliness, and accessibility (Andersson et al., 1997).
The panel chose to focus the discussion in the interim report primarily on the dimension of accuracy. As defined by OMB, accuracy includes the measurement and reporting of estimates of sampling error for sample survey programs, as well as the measurement and reporting of nonsampling error, usually expressed in terms of coverage error, measurement error, nonresponse error, and processing error. The OMB working paper concludes that "it is important to recognize that the accuracy of any estimate is affected by both sampling and nonsampling error" (U.S. Office of Management and Budget, 2001:1-2).