MEASURING RESEARCH AND DEVELOPMENT IN THE U.S. ECONOMY

The Panel to Review Research and Development Statistics at the National Science Foundation was convened in January 2002 by the Committee on National Statistics of the National Research Council (NRC) at the request of the National Science Foundation (NSF) to conduct an in-depth and broad-based study of the research and development (R&D) statistics program of NSF. The data are collected, analyzed, and released by the Division of Science Resources Statistics (SRS), a federal statistical agency in NSF. The panel was asked to look at the definition of R&D, the needs and potential uses of R&D data by a variety of users, the goals of an integrated system of surveys and other data collection activities, and the quality of the data collected in the existing SRS surveys.

The panel’s overall review examines the portfolio of R&D expenditure surveys as a total system, finds gaps and weaknesses, and identifies areas of missing coverage. This interim report presents our findings and conclusions on specific matters of statistical accuracy and reliability in the present array of surveys, as well as interim recommendations on near-term improvements that NSF should consider as it develops plans and makes resource decisions.

In our final report, we will make recommendations for identifying and defining R&D activities and the appropriate goals for an integrated R&D measurement system, as well as recommendations on methodology, design, resources, structure, and implementation priorities.

The panel’s work in this interim report benefited from a background paper prepared by Barbara A. Bailar, which appears in this volume as Appendix A. The paper presents quality profiles of the five major R&D surveys conducted by SRS:

  1. the Survey of Federal Funds for Research and Development;

  2. the Survey of Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions;

  3. the Survey of Research and Development Expenditures at Universities and Colleges;

  4. the Survey of Science and Engineering Research Facilities; and

  5. the Survey of Industrial Research and Development.

SCOPE OF THE INTERIM REPORT

The main focus of this report is on quality, beginning with an assessment of the commitment of the Division of Science Resources Statistics to quality and professional standards. There is no commonly accepted definition of quality for a survey, despite over two decades of intense interest in aspects of quality in federal surveys. A consensus is growing, however, that the quality of federal statistical data encompasses four components: accuracy, relevance, timeliness, and accessibility (Andersson et al., 1997).

In our final report we will address each of these components of quality for each of the NSF R&D surveys, stressing the relevance or usefulness of the information. In this interim report, we focus on aspects of the concept of accuracy. The panel adopted this perspective on the accuracy component of quality in developing our assessment of the quality of the NSF R&D surveys.

ESTABLISHING AN ENVIRONMENT FOR QUALITY

A commitment to quality and professional standards of practice are two recognized elements in the effective operation of a statistical agency (National Research Council, 2001:5). The commitment should involve several key elements: use of modern statistical theory and sound statistical practice, strong staff expertise, an understanding of the validity and accuracy of the data, and quality assurance programs that include documentation of concepts and definitions, as well as data collection methodologies and measures of uncertainty and possible sources of error. In view of the importance of the overall environment for quality, early in our work we reviewed these elements of organizational and management commitment to quality.

As a benchmark, the panel turned to the earlier NRC study Measuring the Science and Engineering Enterprise: Priorities for the Division of Science Resources Studies (1999) and evaluated the progress that SRS has made in implementing the recommendations for quality improvement in that report. That report found that SRS had a good track record of improving data quality and meeting statistical standards in the recent past, and it recommended additional steps to ensure that standards are met across SRS operations. With respect to the R&D expenditure surveys, the previous NRC report recommended three specific steps to improve data quality: (1) require that all contractors provide methods reports that address quality standards; (2) continue recent efforts to provide NSF staff with professional development opportunities to improve statistical skills; and (3) continue to develop and strengthen a program of methodological research, undertaking rigorous analysis of the data collected to assess their quality relative to the concepts they are supposed to measure. These issues will be addressed further in the panel’s final report.

Since the 1999 NRC report was published, SRS has implemented several specific improvements and has laid the groundwork for others. The change of name to the Division of Science Resources Statistics is one indication of the commitment of management to improving the statistical foundation of the R&D data. Other specific actions that have improved or will lead to improvements include the addition of mathematical statistics expertise to the SRS senior staff, the incorporation of specific obligations to measure and report on quality in contracts with the data collection organizations responsible for conducting the several survey programs, and the development of a long-term plan for methodological research for each program.

Mathematical Statistics Expertise

The 1999 NRC report focused considerable attention on perceived staffing issues in SRS, calling for enhancing staff expertise through professional development activities, augmenting staff expertise in statistical methods and subject-matter skills, and developing a more interactive relationship with outside researchers. Over the past two years, SRS has added four full-time senior statisticians and has secured the services of experts in survey design and cognitive aspects of questionnaire design. These staff and consultant enhancements have served to focus attention on statistical methods for the R&D surveys.

Contractual Obligations

SRS has incorporated provisions for measuring and reporting on statistical quality in contracts with the vendors responsible for data collection and estimation. For example, the interagency agreement with the U.S. Census Bureau covering survey operations for calendar year 2004 requires that the Census Bureau prepare and maintain complete and detailed survey methodology and technical documentation, and that it recommend quality improvements and methodological enhancements resulting from information gained during previous survey sampling and processing.

Long-Term Plan for Methodological Research

The multiyear plan for methodological research on the survey of R&D in industry is perhaps the most advanced of these efforts to specify a roadmap for near-term and long-term statistical improvements. In a presentation at the panel’s July 2003 workshop, Ray Wolfe of the SRS research and development statistics program outlined a research agenda and plan of action for the R&D industrial survey. The plan is specified in the SRS Quality Improvement and Research Project List. The list was developed with input from, among others, the OMB clearance process, the SRS staff, the NRC’s Committee on National Statistics and Board on Science, Technology, and Economic Policy, and SRS advisory committees.

The Quality Improvement and Research Project List that was current at the time of the panel’s workshop included 30 items, 9 of which were presented in some detail to the panel. Three of the research items were designed to address OMB clearance conditions: record-keeping and administrative records practices; effects of mandatory versus voluntary reporting on the state items; and web-based survey operations. Other items on the list of nine key activities include research on data collection below the company level; cognitive research on survey forms and instructions and identification of appropriate respondents; survey sample design issues; survey processing (data editing, cleaning, and imputation); research on disclosure and confidentiality issues; and an electronic documentation system for respondent contacts. Each of these research agenda items is discussed in this report.

There is considerable evidence that SRS has adopted organizational and administrative changes to improve the environment for data quality. The inauguration of these efforts was independent of any change in overall program resources. It is encouraging that the initiatives have been enhanced by the increased availability of staff to support the collection and publication of R&D expenditure data. The panel observes that overall resources for these programs increased from $2.9 million in 2002 to $4.3 million in 2003.

The panel concludes that significant steps have been taken by the Division of Science Resources Statistics to foster an environment for improvement of data quality. The panel is hopeful that these recent initiatives, buttressed by additional resources and supplemented by further initiatives such as those outlined in this report, will lay a basis for further improvements in the future.

COMMON TECHNICAL ISSUES

Although united by common criteria and definitions, the NSF R&D surveys are a diverse set of data collections, which follow three fairly distinct formats.

  • The Survey of Industrial Research and Development is a sample survey of companies, with the survey frame drawn from the Census Bureau’s Business Register.

  • The Survey of Federal Funds for Research and Development and the Survey of Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions are surveys of federal agencies and use a frame constructed primarily from information in the president’s budget as submitted to Congress and as included in OMB budget documents.

  • The Survey of Research and Development Expenditures at Universities and Colleges has coverage of the largest academic institutions in terms of defined R&D activity, and the Survey of Science and Engineering Research Facilities utilizes the same institutional list as the R&D expenditure survey and an additional list of biomedical institutions maintained by the National Institutes of Health (NIH).

There are some commonalities. The latter four surveys have utilized a common contractor, ORC Macro, and share some similar data collection and processing procedures. For example, ORC Macro has deployed a web-based system that encompasses systems for data collection, data monitoring including receipt control, and data editing. Other procedures in common include the practice of providing prior-year data to respondents; the use of cutoff amounts to define the universe; the designation of respondents, nonresponse adjustment, and imputation; and methodological reports.

Web-Based Collection

The four federal and college and university surveys are quite modern in their approach. Few other federal data collections have so heavily employed web-based collection, which is used in the great majority of cases in these surveys. (A few respondents are not able or willing to respond electronically, so a manual system is also maintained.)

Many of the traditional chores of survey operations are eliminated in web-based collection. The survey is delivered electronically, rather than in paper format through the mail. The respondent completes an electronic survey form. Follow-up is by e-mail or telephone rather than postal mail. Logs of progress of the agencies and institutions in reporting are generated by the system electronically. The web-based questionnaire enables embedded edits that question the entry of erroneous data so that data checks, repairs, and explanations can be provided by the respondents, and the data are tabulated by the web-based system. In this family of surveys, the target population is small, and the processing environment is simplified in that there is no sample, no nonresponse adjustment by weighting, and no other weighting.
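
To make the embedded-edit mechanism concrete, the following is a minimal sketch, in Python, of the kind of validation a web-based instrument can run before accepting a submission. The field names and the tolerance are hypothetical illustrations, not taken from any NSF instrument.

    def edit_checks(entry):
        """Return a list of edit messages; an empty list means the form can be submitted."""
        problems = []

        # Completeness edit: every required field must be present.
        required = ["total_rd", "basic", "applied", "development"]
        missing = [f for f in required if entry.get(f) is None]
        if missing:
            problems.append("Missing required items: " + ", ".join(missing))
            return problems  # consistency edits cannot run on incomplete data

        # Consistency edit: components must sum to the reported total,
        # within a small rounding tolerance.
        components = entry["basic"] + entry["applied"] + entry["development"]
        if abs(components - entry["total_rd"]) > 0.005 * max(entry["total_rd"], 1):
            problems.append(
                f"Basic + applied + development ({components}) does not match "
                f"total R&D ({entry['total_rd']}); please correct or explain."
            )
        return problems

    # A respondent whose components do not add up is asked to repair the entry:
    print(edit_checks({"total_rd": 1000, "basic": 200, "applied": 300, "development": 400}))

The same mechanism is what can force respondent self-imputation: if the only way to clear an edit is to enter a number, an unsure respondent will enter one.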

The very newness of the web-based system and the employment of automated functions raise several red flags in terms of data quality. For one thing, the system can force respondents to provide data that they are unsure about, simply to complete the entry. In this sense, it forces imputation by the respondent, rather than by NSF. In more traditional survey approaches, the failure to complete a data item may signal a problem with questionnaire design, concepts, or definitions. These useful signals are masked by self-imputation. Another consequence of forcing an entry is that traditional measures of nonsampling error, such as item nonresponse and imputation rates, may be artificially low; statements in NSF publications that “item nonresponse is not viewed as a problem” because item nonresponse rates are under 1 percent may be quite misleading as a measure of overall quality.

Against this backdrop, OMB has challenged NSF to explore the possibility of web-based collection in the larger and more stylized Survey of Industrial Research and Development. The terms of clearance of the survey, when it was last approved by OMB in January 2002, require that “NSF will initiate a web-based version of this survey by the next submission of this clearance request. If NSF is unable to complete a web-based version upon resubmission, NSF must provide justification as to why web-based surveys can not be implemented in this instance, and document the steps being taken to allow full electronic submission of this survey” (U.S. Office of Management and Budget, 2002:2).

NSF and the Census Bureau have introduced several technological initiatives for data collection over the years. For example, firms using the short form (RD-1A) that report no R&D activity can fulfill the reporting requirement by use of toll-free touch-tone data entry. Nearly 90 percent of such respondents in 1998 and 1999 used the touch-tone system. In addition, the Census Bureau utilizes a computerized self-administered survey system, which allows respondents to complete the survey form electronically and submit it on a disk. Approximately 20 percent of firms that were sent both a paper form and a diskette in 1999 chose to report using the diskette. Although these initiatives reduce respondent burden and facilitate data processing at the Census Bureau, they fall short of the data collection and processing improvements offered by the web-based collection utilized in other NSF surveys. In essence, the current initiatives represent merely an automation of the old way of doing business.

Clearly, an intensive regime of development and testing of further automation of data collection, processing, and estimation is required. In keeping with the OMB recommendation that research should focus on web-based collection from businesses, a proof-of-concept study was conducted by the Census Bureau in the 1990s (U.S. Census Bureau, 1998). Work on web-based collection should consider the dynamic aspects of questionnaire design and processing discussed in a recent Committee on National Statistics workshop on survey automation (Tourangeau, 2003). Such a study needs to be developed and conducted in SRS. A comprehensive feasibility study for an effective web-based instrument requires both use of interactive contact with individual respondents, as recommended in this report, and a carefully designed field pilot study.

Provision of Prior-Year Data to Survey Respondents

Many of the NSF surveys provide data collected in prior-year responses on the current year collection instruments. The industry survey imprints prior-year data on the RD-1 form and incorporates the information on diskettes forwarded to the companies under the Census Bureau’s computerized self-administered survey option. For the other surveys, prior-year data are available to respondents on request or automatically.

There are potential strengths and drawbacks to presenting historical data for a current reference period. A paper by Holmberg (2002) outlines motives that include, on the positive side, increasing the efficiency of the data collection, correcting previous errors, reducing response burden, and possibly reducing measurement errors and spurious response variability. Arrayed against these advantages are the possible drawbacks that preprinting can preserve errors rather than correct them, and that the practice might lead to underreporting of changes from one period to another. Holmberg’s experimental study, in which Statistics Sweden preprinted values on a self-administered questionnaire for business establishments, found that questionnaires with preprinted historical values outperformed questionnaires without preprinted values on several measures of data quality. These data quality improvements included (not surprisingly) a significant decrease in response variation in overlapping reference periods, a reduction in spurious reports of change, and less coding error and editing work. He further pointed out that some of the potential drawbacks were not realized (for example, the entry of the same value for different months was more common without preprinting), and there was no indication that change was underestimated with preprinting.

The favorable results evidenced in Holmberg’s research suggest that there is little danger in continuing this practice in the NSF surveys. However, there is the possibility that his results would not be replicated in U.S. surveys conducted with the spectrum of methods used in the NSF surveys. The panel notes that the Census Bureau has initiated testing of the impact of preprinting on its Survey of Residential Alterations and Repairs (personal communication, D. K. Willimack, U.S. Census Bureau, September 2, 2003).

The panel urges NSF to sponsor research into the effect of imprinting prior-period data on the industrial R&D survey in conjunction with testing the introduction of web-based data collection. The traditional means of testing the impact of differential data collection procedures using randomized split samples and respondent debriefings should be considered for this research.
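
A minimal sketch of such a randomized split-sample comparison follows, in Python using only the standard library. The treatment arms, the outcome (a count of spurious change reports), and all counts are hypothetical; a real test would use the survey’s actual quality measures.

    import math
    import random

    def assign_arms(unit_ids, seed=2003):
        """Randomly split sample units into preprint and control arms (fixed seed for reproducibility)."""
        rng = random.Random(seed)
        ids = list(unit_ids)
        rng.shuffle(ids)
        half = len(ids) // 2
        return ids[:half], ids[half:]

    def two_proportion_z(errors_a, n_a, errors_b, n_b):
        """z statistic for the difference between two error proportions."""
        p_a, p_b = errors_a / n_a, errors_b / n_b
        p = (errors_a + errors_b) / (n_a + n_b)       # pooled proportion
        se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    preprint, control = assign_arms(range(2000))
    # Hypothetical outcome counts after the field test:
    z = two_proportion_z(errors_a=55, n_a=len(preprint), errors_b=90, n_b=len(control))
    print(f"z = {z:.2f}")   # |z| > 1.96 indicates a difference at the 5 percent level

Respondent debriefings would supplement such a test by explaining why any observed differences arise.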

Designation of Respondent

A major challenge in all of the NSF surveys is to ensure that the survey questionnaire will get to the appropriate and knowledgeable (and motivated) respondent. The role of the data provider is central to all of the surveys, and it is handled very differently from survey to survey.

The Survey of Science and Engineering Research Facilities identifies one person as an institutional coordinator. The survey recognizes that this person is critical to the success of this survey, which must be transferred among various parts of the institution. The institutional coordinator decides what others should provide, which sources of data to use, and who coordinates the pieces of the effort, and then reviews the data. This plan appears to work generally well for this survey.

In sharp contrast, the Survey of Industrial Research and Development is mailed to a company, with no contact person designated. Often the form is sent to a central reporting unit at corporate headquarters, with respondent “generalists” fanning out the collection to various organizations and functions within the corporation. There is little attempt to develop a standing relationship with these key reporters or to educate them on the nuances of the data collection.

The role of the data provider is critically important to the success of a survey. We note that, in response to discussions at a meeting of panel members with NSF and data collection staff in April 2003, it has been proposed that the NSF Quality Improvement and Research Project List research agenda include documentation of respondent contacts for the industry R&D survey.

The panel supports the initiative to identify individual respondents in companies as a first and necessary step toward developing an educational interaction with respondents so as to improve response rates and the quality of responses.

Nonresponse Adjustment and Imputation

The surveys have very different approaches to nonresponse and imputation. The federal funds and college and university surveys attempt to eliminate the problem of unit nonresponse completely, by seeking 100 percent compliance from the universe of reporters. Item nonresponse is actively discouraged, either by making it difficult not to enter an item in a web-based report, or by encouraging reporters to estimate information when actual data are not available, as is the practice on the federal funds report. Thus there is little unit or item nonresponse in this family of surveys, although, as mentioned earlier, some of the methods of achieving these response rates have a questionable overall impact on data quality, and their impact needs to be investigated in an evaluation study.

The Survey of Industrial Research and Development has fairly significant response challenges. Of the companies surveyed in 2000, about 15 percent did not respond. Moreover, the means of counting and then treating nonresponse in this survey raises statistical issues. The Census Bureau lists a company as responding when it has heard back from the company, even if the company reported that it was out of scope, out of business, or merged with another company. In other words, a response is a form returned, not necessarily information received. The Census Bureau then imputes values to data items representing these nonresponding units. The panel is concerned about this procedure, in that it does not follow acceptable practices for reporting nonresponse, and we are concerned about the impact of this practice on both the reported nonresponse rates and the estimates. The reported values for item nonresponse rates for key data elements in the survey are also quite problematic. Imputation rates are published, but they are a poor proxy for item nonresponse rates.

The panel recommends that NSF, in consultation with its contractors, revise the Statistical Guidelines for Surveys and Publications to set standards for the treatment of unit nonresponse and to require the computation of response rates for each item, prior to sample weighting. Once these guidelines have been clearly specified by NSF and adopted by the Census Bureau and the other contracting organizations, information on true unit and item nonresponse can be developed and assessed.
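
The following sketch, with hypothetical status codes and field names, illustrates the computations such guidelines would direct: out-of-scope returns are excluded from the unit response rate rather than counted as responses, and item response rates are computed among true respondents, unweighted, with imputed values not counted as reports.

    cases = [
        {"status": "reported",     "total_rd": 5200, "state_detail": 5200},
        {"status": "reported",     "total_rd": 800,  "state_detail": None},   # item missing
        {"status": "out_of_scope", "total_rd": None, "state_detail": None},   # merged or out of business
        {"status": "no_return",    "total_rd": None, "state_detail": None},
    ]

    # Unit response rate: units providing data, over eligible (in-scope) units only.
    eligible = [c for c in cases if c["status"] != "out_of_scope"]
    unit_rr = sum(c["status"] == "reported" for c in eligible) / len(eligible)

    # Item response rate, computed before any weighting: among responding units,
    # the share that actually reported the item (imputed values do not count).
    respondents = [c for c in cases if c["status"] == "reported"]
    item_rr = sum(c["state_detail"] is not None for c in respondents) / len(respondents)

    print(f"unit response rate: {unit_rr:.0%}; item response rate (state detail): {item_rr:.0%}")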

Survey Documentation

The methodological reports from the various surveys vary widely in their completeness. The SRS guidelines are quite clear on the policy for survey documentation in a methodological report expected at the completion of each survey. There is some evidence that recent methodological reports have improved in terms of depth and focus (see, for example, Quantum Research Corporation, 2002). The means of ensuring continued improvement in survey documentation is specified in the SRS guidelines: survey contracts must include a requirement for a methodology report covering items addressed in this standard (National Science Foundation, no date).

In the sections that follow, we consider each of the five surveys individually. Our examination is based largely on the analysis in Appendix A.

SURVEY OF INDUSTRIAL RESEARCH AND DEVELOPMENT

In complexity and impact, this is the foremost of the surveys in the R&D expenditure survey portfolio. The survey has the longest history and currently absorbs about one-third of RDS program resources.

The share of U.S. R&D activity represented by business and industry now totals about 75 percent of all R&D activity, so it is critically important to get this survey right. There are a number of conceptual and methodological issues with the survey that will be addressed in the final report. This section explores some of the most challenging issues in statistical methodology.

Following a major revision in this survey in the early 1990s, changes to the survey in the past decade have been piecemeal and incremental, impeded by the lack of resources to modernize the survey operations. Today’s industrial R&D survey stands in contrast to the other surveys in the NSF portfolio, in that the statistical methodologies and technologies employed in the survey are far from cutting-edge.

Survey Design

The survey covers all companies doing business in the United States. The survey has been conducted by the Census Bureau, utilizing its list of nonfarm private companies. It is a large and growing survey, with 31,200 companies in the 2003 sample, up from 24,200 in 2001. It is also a changing survey, increasing coverage to smaller and smaller firms, expanding coverage in nonmanufacturing sectors, and accommodating changes in industrial classification structures over the years. The basic sampling frame is the Business Register, previously known as the Standard Statistical Establishment List. This list, used since 1976, has problems in coverage and currency that will be examined in more detail in the panel’s final report. For now it should be mentioned that the Business Register may have particular problems with completeness of coverage of small firms; this potential undercoverage of small firms may be an important issue for measurement of R&D expenditures by type or by state. In view of the growing recognition that small firms are increasing as a share of research and development spending, a problem of undercoverage of small firms may well lead to a growing problem of underestimation (Acs and Audretsch, 2002).

The sampling procedure has also evolved, while maintaining a constant focus on companies with the largest R&D expenditures (1,700 companies in 2000), with additional strata defined by industry classification and various expenditure-level cutoff schemes. The largest firms were first identified by the number of employees, and later by total R&D expenditures based on the previous year’s survey. The industry strata are defined as manufacturing, nonmanufacturing, and unclassified, with 48 categories represented in the 2002 round. To improve state estimates, take advantage of historical data, and improve industry-level estimates, the sample was further divided in 2001 to represent “knowns” and “unknowns.” The known segment consists of companies that reported R&D expenditures at least once in the previous three survey cycles. These various sampling changes, each introduced to achieve a worthwhile objective, have made it very difficult to control sampling error in the survey estimates. The variability of the estimates has been a constant concern in the survey. Some of the sampling errors, as computed, remain very high, particularly for the nonmanufacturing universe. High sampling errors and high imputation rates for some of the key companies mean that the quality of the published data is suspect and suggest the need for an evaluation of the statistical underpinnings of the survey. This interim report focuses on several specific issues, while broader issues will be discussed in the final report.
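
As a schematic of the stratification just described, the sketch below assigns frame records to a certainty stratum or to industry-by-known/unknown strata. The cutoff value and field names are invented for illustration; the actual stratum boundaries are set by the Census Bureau.

    def assign_stratum(company, certainty_cutoff=50_000_000):
        """Return a stratum label for one frame record (all thresholds illustrative)."""
        if company.get("prior_rd", 0) >= certainty_cutoff:
            return "certainty"   # largest R&D performers are self-representing
        known = "known" if company.get("reported_rd_last_3_cycles") else "unknown"
        sector = company.get("sector", "unclassified")   # manufacturing, nonmanufacturing, unclassified
        return sector + "/" + known

    frame = [
        {"name": "A", "prior_rd": 120_000_000, "sector": "manufacturing", "reported_rd_last_3_cycles": True},
        {"name": "B", "prior_rd": 2_000_000, "sector": "nonmanufacturing", "reported_rd_last_3_cycles": False},
    ]
    for company in frame:
        print(company["name"], "->", assign_stratum(company))

The interaction of cutoffs, sector strata, and the known/unknown split is part of what complicates variance control: each boundary change alters selection probabilities, and hence the weights, from cycle to cycle.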

Data Collection

The method of data collection currently relies on two forms: the RD-1, sent to known, large R&D performers, and the more limited RD-1A, sent to small R&D performers and companies that have not previously reported. Both forms collect sales or receipts, total employment, employment of scientists and engineers, R&D expenditures, the character of the R&D (basic research, applied research, development), R&D expenditures in other countries, and R&D performed under contract by others. In addition, the RD-1 form collects information on federally funded R&D by contracting agency, R&D costs by type of expense, domestic R&D expenditures by state, energy-related R&D expenditures, and foreign R&D expenditures by country.

Although a diskette is provided and a web-based version of the form is available for completion and mail-in, the primary mode of data collection is postal mail. Survey forms are mailed in March, with a requested completion date of 30 or 60 days later. Mail follow-up is fairly extensive, with a total of five mailings to delinquent Form RD-1A recipients, but telephone follow-up is limited by resource constraints to the 300 largest R&D performers. If no response is received and no current-year data are reported, several data items are imputed. The data collection procedures employed by the Census Bureau for the industry survey stand in stark contrast to the more technologically advanced procedures employed in the smaller federal and academic surveys, which, for the most part, are web-based and have more intensive education, response control, and follow-up schemes.

The printed questionnaire is in dire need of review and substantial revision. The last full-scale review of the cognitive aspects of the industry R&D survey was reported in 1995 by Davis and DeMaio. This internal Census Bureau study identified a number of possible improvements in graphics and question wording in the questionnaire. Some of the graphical suggestions have been implemented, but a key recommendation that all survey items be put into question format has not yet been acted on. The wide range of suggestions by Davis and DeMaio for possible question wording changes was based on the level of respondent understanding of the concepts and definitions implied in the questions.

The issues developed by Davis and DeMaio (U.S. Census Bureau, 1995) strongly support the need for an ongoing dialogue between the data collectors and data providers in the industry R&D survey. An active program of respondent contacts and recordkeeping practice surveys that would have supported a dialogue was dropped in the early 1990s because of resource constraints. Such a program need not be expensive or overly intensive. Davis and DeMaio were able to obtain valuable insights into cognitive issues with just 11 company visits and a mail-out study of some 75 companies that incorporated cognitively oriented debriefing (COD) questions.

The panel strongly recommends that the National Science Foundation and the Census Bureau resume a program of field observation staff visits to a sampling of reporters to examine record-keeping practices and conduct research on how respondents fill out the forms. In this recommendation, the panel adds its voice in support of the OMB directive, which gave approval of the industry R&D survey in 2002 with the proviso that “a record-keeping study should be done to find out what information businesses have regarding the voluntary items and the reasons for nonresponse to those items” (U.S. Office of Management and Budget, 2001).

The problem of nonresponse, however, does not apply solely to items collected on a voluntary basis. As with all surveys, some sample units do not respond at all, or they omit some items. The response rate from 1999 to 2001 was in the 83 to 85 percent range. When there is additional attention, as is the case with the largest 300 companies, response rates can be higher. The rate for these large companies was 90 percent in 2000. The impact of new rules requiring mandatory reporting in the 2003 survey cycle is not yet known.

Item nonresponse is also a problem, for both the five mandatory data items and the voluntary items. It is difficult to assess the scope and impact of the problem of nonresponse, since no item nonresponse rates are given for any items. Instead, as mentioned earlier, the Census Bureau publishes imputation rates, which can be quite large and serve as a poor proxy for item nonresponse rates. NSF could make a significant contribution to understanding the quality of the data by ensuring clear definitions and regular reporting of item nonresponse rates. Furthermore, the recent rise in the proxy measure (imputation rates) gives cause for concern and adds impetus to the need to understand the quality impact of item nonresponse.

Survey Content

Closely associated with issues of data collection and the cognitive aspects of survey design are issues of data content and related questions of usefulness to potential data users. Most of these issues will be deferred to our final report, but a few are worth mentioning in the present context of statistical accuracy.

One problematic aspect of the survey content brings into question the validity of data that break down R&D into the components of basic research, applied research, and development. Davis and DeMaio found that R&D professionals and support personnel have a good understanding of these concepts, even though they do not necessarily classify their own work that way (U.S. Census Bureau, 1995). However, the persons most often charged with responding to the questionnaire are in the financial or government relations offices of the companies, and their understanding is less likely to be accurate. This finding was affirmed in panel discussions with representatives of reporting companies and covered industries. Although definitions of these terms are included in instructions transmitted with the forms, prior to the 2003 survey they were not included on the form and thus may have been overlooked. Abbreviated instructions have been included in a revised 2003 questionnaire being introduced in March 2004.

A second content problem relates to estimates of the number of research and development scientists and engineers employed by the company, which derive from the definition: scientists and engineers with a four-year degree or equivalent in the physical and life sciences, mathematics, or engineering. Aside from the obvious problem of transferring the form between the financial office of the company, where the form is completed, and the personnel department, where personnel records are kept, the definition is in some respects quite vague. What does “equivalent” to a four-year degree mean? Even if it is possible to identify a field of educational specialization, what is a person’s field of employment in the company? When an employee works on multiple tasks, how should that person’s time be apportioned?

Identification of the location of R&D activity by state is also problematic. Although there is intense interest in the location of R&D activity, there is anecdotal evidence that respondents have great difficulty in accurately allocating this activity to geographic areas, and consequently they have developed measures for geographic allocation that produce data of questionable quality. NSF needs to conduct a more intensive study to determine the quality of the state breakdown of R&D activity and to implement changes if warranted.

Processing

According to a series of memoranda by D. Bond that discuss research on various sources of processing errors (U.S. Census Bureau, 1994), data entry procedures produced little error, but the editing process was replete with potential for error. That potential starts with the observation that there is no written description of the editing process, including the process in which an analyst supplies data codes. This study is now a decade old. Although many of the sources of editing error may have been corrected, the absence of a more recent study of processing error and the lack of current documentation of the editing process cause concern over the impact of this source of error.

The panel recommends that the editing system be redesigned so that the current problems of undocumented analyst judgment and other sources of potential error can be better understood and addressed. This redesign should be initiated as soon as possible, but it could later proceed in conjunction with the design of a web-based survey instrument and processing system.

Imputation is an integral part of the survey operation, and the rates of imputation are high. Again, Bond’s studies found several sources of potential error in an imputation process that varies with the item being imputed. It should be possible to determine clearly whether errors arise in the editing process or in the imputation process.

Estimates of R&D expenditures by state are an important output of the NSF R&D expenditure program. The estimation of state R&D expenditures has been a difficult aspect of the survey for some time. The problem arises because the sample of companies in the states can vary considerably from year to year. A noncertainty reporter that was not in last year’s sample may be selected and report a significant amount of R&D concentrated in one state. If the new reporter is large enough, it will be retained with certainty for the next year, but it can be an outlier in the current year and cause a spike in the estimate for a state. Until 1997, NSF was able to use an outside database to supplement data from the survey in order to improve and stabilize the state estimates. These outside data are no longer available. Other measures have been implemented to stabilize the sample, such as continuing the top 50 R&D companies in each state from year to year.

The most recent attempt to improve the estimation of state R&D expenditures has been the development of a new composite estimator, in which the first term is the unweighted contribution to R&D coming primarily from certainty stratum cases, and the second term is a ratio estimate of the contribution from the noncertainty sample units. The panel commends the National Science Foundation and the Census Bureau for developing this composite estimator, which takes into account research on small-area estimation. However, we recommend that additional simulations be conducted to assess the bias, variance, and mean square error of these new state estimates. In addition, future research could profitably explore alternative estimators for handling outliers, drawing on the literature on finite population estimation.
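
As a point of reference, the following is a minimal sketch of a composite estimator of the general form described, assuming a simple ratio adjustment on an auxiliary variable for the noncertainty term; the estimator actually fielded by the Census Bureau may differ in its details.

    def state_estimate(certainty_rd, noncertainty_sample_rd,
                       noncertainty_sample_x, noncertainty_frame_x):
        """
        certainty_rd: summed R&D reported by certainty-stratum companies in the state
        noncertainty_sample_rd: R&D reported by sampled noncertainty companies in the state
        noncertainty_sample_x: auxiliary total (e.g., payroll) for those sampled companies
        noncertainty_frame_x: auxiliary total for all noncertainty frame companies in the state
        """
        ratio = noncertainty_sample_rd / noncertainty_sample_x
        return certainty_rd + ratio * noncertainty_frame_x

    # Illustrative values in millions of dollars:
    est = state_estimate(certainty_rd=900.0, noncertainty_sample_rd=40.0,
                         noncertainty_sample_x=400.0, noncertainty_frame_x=2500.0)
    print(f"composite state estimate: {est:,.0f} million dollars")

Simulation of such an estimator over repeated samples, with and without injected outliers, is one way to produce the bias, variance, and mean square error assessments the panel recommends.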

SURVEY OF FEDERAL FUNDS FOR RESEARCH AND DEVELOPMENT

The Survey of Federal Funds for Research and Development is a virtual census of federal agencies that support national scientific activities. It collects some data items that have rigid definitions and strict accounting measures, and others, such as current- and future-year obligations, that are best estimates on the part of agency reporters. The survey data link to the budget figures submitted to OMB in January each year. The census is largely automated: reporting forms are provided by a web-based system, with processing and editing done electronically. The survey is conducted by ORC Macro.

Survey Design and Implementation

There is no sampling, hence no sources of sampling error. Coverage includes all relevant federal agencies and subagencies that fund R&D activities, so there are no apparent coverage issues.

Data Collection

The trigger for inclusion in the annual data collection round is reporting of R&D activities in OMB budget documents or in the media. Rather intense effort is given to identifying respondents in the selected agencies and contacting respondents at multiple intervals—before, during, and after data collection. This is not a full-time job for respondents, and many of them are replaced from year to year, so reeducation is a constant challenge. Data collection is tiered, in that larger R&D agencies are asked to provide data on obligations to states and to colleges and universities by field of science. The response rate for the census is an enviable 100 percent, and there is no item nonresponse since the agency respondents must answer all questions before the data can be submitted.

Lack of timeliness is a continuing issue. In his remarks at the panel’s July 2003 workshop, Kei Koizumi pointed out that data for fiscal year (FY) 2001, which ended on September 30, 2001, were just becoming available in July 2003, over a year and a half later. This delay in assembly and transmittal of the data is particularly troublesome because reporting offices have the data at the end of the fiscal year (interview with Mark Herbst, Office of the Secretary of Defense, March 26, 2003). The lack of timeliness is such a severe deterrent to utility of the data that most organizations that assess R&D spending trends turn to budget authority data rather than obligations and outlays, limiting timely application of some very useful R&D analysis tools, such as the Federal Science and Technology Budget. Although several steps are being taken by the contractor to enhance cooperation and speed data processing, the list of problems inhibiting on-time reporting is a constant challenge to the staff. Speaking at an agency workshop, Ronald Meeks of NSF identified as timeliness issues the lack of support from senior officials in some agencies, the need for constant reeducation of reporters, problems with completion of the automated form, delays from internal review and controls, and timing conflicts with the higher priority president’s budget (Quantum Research Corporation, 1997). In the final report the panel intends to make recommendations for measures to improve timeliness.

Processing

Editing functions are controlled and expedited by the extensive automation of the process. There is no evidence that the edits introduce any errors into the process.

SURVEY OF FEDERAL SCIENCE AND ENGINEERING SUPPORT TO UNIVERSITIES, COLLEGES, AND NONPROFIT INSTITUTIONS

The Survey of Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions is congressionally mandated and is the only source of comprehensive data on federal science and engineering support to individual academic and nonprofit institutions. Like the federal R&D funds survey, this survey is completed by federal agency respondents. At the beginning of each year, a survey form is completed for each college, university, or nonprofit institution for which an agency obligated R&D funding during the previous fiscal year. Obligations to colleges and universities are listed by field of science and engineering activity; obligations to independent nonprofit organizations are listed by R&D expenditures and R&D plant. This survey is conducted by ORC Macro.

Survey Design

The frame for the survey is the list of all federal agencies that sponsor R&D, obtained from the president’s budget submission to Congress. In practical terms, the survey covers respondents to the federal funds survey, focusing on only the largest of the agencies (in FY 2000, 18 agencies were in the target population). Unlike the federal funds survey, this survey is not a census of science and engineering support. Not all agencies are surveyed, and some funding can be missed. While the overall amount of missed funding is not significant, the patterns of funding by agency and recipient may be somewhat distorted by these omissions.

Data Collection

The data collection is web-based, with automated functions supporting all data collection, data imports, data editing, and trend checks. There is no nonresponse from agencies and no item nonresponse, since the forms must be completed prior to transmittal, raising issues that need to be studied, as discussed before. It is possible that coding errors, such as an incorrect institutional code or incorrect branch of a multiunit institution, could lead to errors in the estimates of funding by institution. Matching program descriptions to proper funding categories may cause some confusion on the part of respondents.

Processing

There is no keying or weighting of data, so there is no obvious source of error in the processing stages.

SURVEY OF RESEARCH AND DEVELOPMENT EXPENDITURES AT UNIVERSITIES AND COLLEGES

This survey is the primary source of information on separately budgeted R&D expenditures in academia in the United States and outlying areas. The annual survey covers the 500 to 700 colleges and universities that have doctoral programs in science and engineering fields or that annually perform at least $150,000 in separately budgeted R&D. The survey population also includes all Federally Funded Research and Development Centers, and all historically black colleges and universities that perform any separately budgeted R&D in science and engineering. The survey is carried out by ORC Macro.

Survey Design

All institutions in coverage with $150,000 or more in S&E expenditures reported in the previous survey are included with certainty. The frame is updated each year by comparing the previous frame for this survey with the list of institutions in coverage for the NSF-NIH Survey of Graduate Students and Postdoctorates in Science and Engineering and the list of academic institutions that receive federal S&E R&D funding as reported by federal agencies in the federal funds survey. The list of Federally Funded Research and Development Centers is maintained by NSF, and the list of historically black colleges and universities is maintained by the U.S. Department of Education. Thus, there is no sampling as such, but errors may arise from the multisource frame itself and from the application of the criteria for inclusion. For example, the frame could easily fail to identify an academic institution with at least $150,000 in R&D if it was not identified in the previous collection or found in the comparison with the graduate student survey. The elimination from coverage of less-than-four-year colleges is presumed to result in only a very small underestimate.
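
A minimal sketch of the union-and-threshold logic behind this multisource frame follows; the institution identifiers, expenditure values, and simplified inclusion rules are invented for illustration.

    prior_frame   = {"U001", "U002", "U003"}
    grad_survey   = {"U002", "U004"}            # NSF-NIH graduate student survey list
    federal_funds = {"U003", "U004", "U005"}    # recipients of federal S&E R&D funding
    ffrdcs        = {"F001"}                    # Federally Funded R&D Centers (always included)
    hbcus         = {"H001"}                    # HBCUs with any separately budgeted S&E R&D
    doctoral      = {"U001"}                    # doctoral S&E programs qualify regardless of threshold

    candidates = prior_frame | grad_survey | federal_funds

    # Hypothetical prior-year S&E R&D expenditures used for the $150,000 screen.
    expenditures = {"U001": 2_400_000, "U002": 90_000, "U003": 160_000,
                    "U004": 500_000, "U005": 151_000}

    frame = {u for u in candidates
             if expenditures.get(u, 0) >= 150_000 or u in doctoral} | ffrdcs | hbcus
    print(sorted(frame))   # U002 falls below the threshold and drops out

The failure mode noted above is visible in this structure: an institution absent from the prior frame and from both comparison lists never enters the candidate set, however large its R&D.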

Data Collection

For the past two collection rounds, the survey has been conducted in electronic format only. It has usually been disseminated in November, with follow-up activities taking place from January to July, so the period of intensive response solicitation spans more than six months. Little is published or analyzed about the characteristics of the respondents in the institutions, although it is known that there is some annual turnover and that appropriate completion of this fairly complex questionnaire usually requires obtaining data from multiple sources within an institution, as well as a full understanding of concepts and definitions such as field of science and engineering. Veteran respondents do participate in periodic workshops to identify items that are troublesome. The panel is concerned about the lack of profile information about the respondents and the limited training or retraining of these respondents as part of ongoing survey operations. On an ongoing basis, NSF should contact a sample of respondents to check their records in order to improve understanding of the best means of gathering the data, the sophistication of reporting sources within an institution, and the interpretation of questions and definitions.

It was noted that, by the closing date of October 2002 for the 2001 survey, completed questionnaires had been received from 95 percent of the academic institutions, including 100 percent of the top 100 institutions and Federally Funded Research and Development Centers. As was the practice with the ORC Macro surveys, all missing data items, including those for nonrespondents, were imputed. No item nonresponse rates were reported, so it was not possible to assess the quality of the individual items in the report. The printed collection form appears to be quite busy and suffers from a lack of good graphic design. This has been less of a problem in the electronic format, in which questions are interspersed with directions, definitions, and reminders. Concerned about the lack of knowledge about response patterns, the panel recommends study of the cognitive aspects of collection instruments and reporting procedures.

Survey Processing

Imputation is a significant issue for this survey. In 2001, imputation was used to provide information for the small proportion of the survey population (4 percent) that did not provide any information at all, as well as for item nonresponse. The imputation factors were generated by class of institution and derived from responding institutions for three key variables: total R&D expenditures, federally financed R&D expenditures, and total research equipment expenditures. These factors were used along with the previous year’s data. This methodology has led to substantial errors, especially when imputation is needed for a number of years. The estimates for basic academic research have been especially troublesome, since the response rate for this item has been in the range of 83 percent. In FY 2001, NSF needed to correct the “federal basic research” and the “total research” estimates by substantial amounts because of revisions in a large university’s basic research spending, a number that had been imputed for 15 consecutive years.

NSF has conducted promising research to improve the imputation procedures. A memorandum by Brandon Shackelford of ORC Macro outlined an approach that utilizes a regression model for imputation of the basic research totals, which has been subjected to initial tests (Opinion Research Corporation, no date). Although the panel welcomes this research into a model-based approach to imputation, we are concerned that the tests were not sufficient to judge the soundness of the regression approach. The research should be redone utilizing the more standard procedure of withholding a set of independent data in order to test the model. The past practice of using imputed data for long periods as a basis for new imputations is particularly dangerous.
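
The holdout procedure the panel has in mind can be sketched briefly; the single predictor, the data values, and the use of ordinary least squares are assumptions for illustration and do not reproduce the ORC Macro model.

    import statistics

    # (predictor, actual basic-research value) pairs from reporting institutions.
    train = [(100, 62), (250, 140), (400, 230), (620, 350), (800, 470)]
    holdout = [(180, 101), (520, 300)]          # withheld from the fit

    slope, intercept = statistics.linear_regression(
        [x for x, _ in train], [y for _, y in train]
    )

    # Judge the model on data it never saw, not on its own training cases.
    for x, actual in holdout:
        predicted = slope * x + intercept
        print(f"predicted {predicted:.1f} vs actual {actual}, error {predicted - actual:+.1f}")

Large or systematic errors on the withheld cases would indicate that the regression is not yet sound enough to drive imputation.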

SURVEY OF SCIENTIFIC AND ENGINEERING RESEARCH FACILITIES

The Survey of Scientific and Engineering Research Facilities is a biennial survey of academic institutions that also includes independent biomedical research facilities with funding from the National Institutes of Health. It is designed to provide rather detailed data on the status of research facilities. This congressionally mandated survey covers the same academic institution population as the expenditure survey, with the addition of the NIH population of nonprofit biomedical research organizations and independent hospitals, identified each year by NIH on the basis of previous fiscal year extramural funding.

Survey Design

This data collection became a census of the eligible population in 1999. The eligible population has shifted somewhat over time. For example, the threshold for inclusion was raised from $150,000 to $1 million in 2003, and, beginning in 2001, institutions granting master’s degrees were no longer automatically included.

The accuracy of the identification of the universe depends on the accuracy of the Survey of R&D Expenditures at Colleges and Universities for the academic sector. Previously discussed errors in reporting and imputation may affect the quality of the list. There is a possibility of double-counting in the case of overlapping coverage between the NSF and the NIH lists.

Data Collection

The key to success of this burdensome survey is the institutional contact at each surveyed institution. For the most part, the input data are not maintained centrally within the institutions, so each institutional contact must determine the most effective data collection approach, work with multiple internal sources of information, and review the data before submission. The institutional contacts are identified in an intensive campaign preceding survey mail-out. They are not necessarily located in one type of office, and many of them change from year to year.

The questionnaire is evolving, with major changes introduced in the past two survey cycles. Many of the changes, the direct result of a cognitive review of the questions, were introduced to provide greater clarity and to remove redundancy. Our review indicates that this evolution process needs to continue.

The new design introduced with the web-based collection has increased the amount of data sought, introducing such questions as the identification of new construction in the previous two years, with a project worksheet to be completed for each individual project. Some of the concepts are new and possibly vague. For example, the new questions on computer technology and cyber infrastructure introduce new collection challenges, given the wide variety of institutional practices for computer and software procurement and inventory. The panel recommends that NSF continue to conduct a response analysis survey to determine the base quality of these new and difficult items, study nonresponse patterns, and make a commitment to a sustained program of research and development on these conceptual matters.

Even with these burdensome data inquiries, the overall response rate in 2001 was about 90 percent for academic institutions and 88 percent for biomedical institutions. This is a testament to the institutional contact program and the determination of the data collectors. The differences in response rates between public and private institutions are of concern: the lower rates for private institutions are perhaps the result of institutional traditions and may be the cause of larger error in their estimates. Some of these issues may be resolved in the current collection of data for 2003, when NSF will publish data by institution, with only a few sensitive data items suppressed. The data should become more useful as benchmarks, and this procedure should also drive up institutional participation.

Processing

The survey employs web-based procedures that require all data items to be completed before the form can be submitted. This may force respondents to enter doubtful data or to impute answers that are not obtainable from organizational records.

Imputation procedures for paper-based responses vary by whether the institution previously reported. If the unit previously reported, prior responses were used in the imputation procedure; if not, other methods were employed. The exact imputation procedure used by NSF is not well documented, but it appears that imputation is used for unit nonresponse, a practice that is highly unusual in surveys. In most surveys, unit nonresponse is handled by weighting, as it was in this survey in 1999. At a minimum, NSF is urged to compare the results of the imputation and weighting procedures.
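
The following is a minimal sketch of the kind of comparison the panel urges, under strong simplifying assumptions: a single weighting class, equal base weights, and invented values. It is meant only to show that the two treatments of unit nonresponse can yield different totals, not to reproduce either agency's estimator.

```python
# Illustrative comparison of two treatments of unit nonresponse, under a
# single weighting class and equal base weights. All values are invented.

respondents = [120.0, 95.0, 140.0]   # reported R&D facility totals ($K)
prior_values = [110.0, 80.0]         # nonrespondents' prior-cycle reports

# (1) Weighting: inflate respondent weights so the 3 respondents represent
#     all 5 eligible units, as was done in the 1999 round.
n_eligible = len(respondents) + len(prior_values)
weight = n_eligible / len(respondents)
total_weighted = weight * sum(respondents)

# (2) Imputation: substitute each nonrespondent's prior-cycle value, as the
#     survey reportedly does for units that responded in an earlier cycle.
total_imputed = sum(respondents) + sum(prior_values)

print(f"weighted estimate: {total_weighted:.1f}")  # 591.7
print(f"imputed estimate:  {total_imputed:.1f}")   # 545.0
```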

PRIORITIES FOR NEAR-TERM AND LONGER-TERM IMPROVEMENTS TO STATISTICAL METHODOLOGY

A one-sentence summary of the findings of this interim review is that, overall, the surveys are largely attentive to the use of proper statistical methodology and are moving in the direction of improvement where they are not. Indeed, it could be said that most of the federal and academic surveys are on the cutting edge of technology and that the industry survey is largely based on careful application of known sampling and estimation techniques. This is not to say that serious attention to statistical methodology is unnecessary. The panel has expressed several concerns and made several recommendations for improvements to statistical techniques and procedures to NSF and the organizations directly responsible for the surveys.

The panel understands that resources for improvements in statistical methodology are scarce and thus must be wisely committed. We also recognize that, throughout their history, these surveys of research and development have been leaders in testing and developing new procedures for survey operations.

In view of the concerns and recommendations presented in this interim report, the panel recommends an active and focused program of testing and development with priority attention to the following matters.

Research on Mandatory Versus Voluntary Reporting

Although a 1991 study of the effect of mandatory versus voluntary reporting on the 1990 RD-1 survey showed quite conclusively that mandatory surveys achieve higher response rates than voluntary ones, the recent conduct of an all-mandatory survey round in 2002 provides a unique opportunity to capture information and conduct focused research on response characteristics in a mandatory environment. NSF should examine such issues as respondents' willingness to answer otherwise voluntary items and the quality of their responses, both by matching the 2002 survey responses with those of previous and subsequent years and through concentrated response analysis surveys of the reporters whose response rates appear to be affected by the mandatory requirement.
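
A minimal sketch of the matching study described above, assuming respondents can be linked across years by a stable identifier; the respondent records, item names, and values are invented for illustration only.

```python
# Illustrative sketch of the matching study: link the 2002 (mandatory)
# responses to an adjacent voluntary year by respondent ID and compare
# item response on an otherwise-voluntary item. All data are invented.

voluntary_2001 = {"A": {"rd_total": 50.0, "federal_rd": None},
                  "B": {"rd_total": 30.0, "federal_rd": 12.0}}
mandatory_2002 = {"A": {"rd_total": 55.0, "federal_rd": 20.0},
                  "B": {"rd_total": 31.0, "federal_rd": 13.0}}

def item_response_rate(year, item):
    """Share of respondents who answered the given item (None = missing)."""
    answered = sum(1 for rec in year.values() if rec[item] is not None)
    return answered / len(year)

matched = set(voluntary_2001) & set(mandatory_2002)
print(f"matched respondents: {len(matched)}")
print(f"federal_rd response, 2001: {item_response_rate(voluntary_2001, 'federal_rd'):.0%}")
print(f"federal_rd response, 2002: {item_response_rate(mandatory_2002, 'federal_rd'):.0%}")
```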

Resurrection of a Response Analysis Survey Program

The program of visits by NSF and the Census Bureau to key RD-1 reporters, which was cancelled several years ago because of funding shortfalls, needs to be reinstated in a formal way. During these reporter contacts, the highest priority should be placed on understanding respondent record-keeping practices, identifying the organizational structure for reporting, and identifying the appropriate respondents within the company. On these visits, staff should work with the reporters to ascertain the most efficient means of obtaining the response (automated or paper) and assist them in drilling down into the data to ensure that survey definitions are in keeping with respondent record-keeping capacities.

Development of an Improved Survey Management System

The Census Bureau should be asked to develop an improved RD-1 survey management system that includes electronic documentation for managing respondent contacts and that improves the capture of editing and imputation procedures, both at Census Bureau headquarters and at the processing center in Jeffersonville, Indiana.
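
As one illustration only, the records such a system might capture could look like the following sketch; the classes and fields are hypothetical and reflect the panel's desiderata (documented respondent contacts, an audit trail of edits and imputations) rather than any actual Census Bureau design.

```python
# Hypothetical record types for an improved RD-1 management system.
# Fields are illustrative, not an actual Census Bureau schema.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ContactEvent:
    respondent_id: str
    staff: str
    mode: str                      # e.g., "visit", "phone", "email"
    notes: str
    when: datetime = field(default_factory=datetime.now)

@dataclass
class EditAction:
    respondent_id: str
    item: str                      # questionnaire item affected
    old_value: Optional[float]
    new_value: Optional[float]
    method: str                    # e.g., "analyst edit", "prior-year imputation"
    when: datetime = field(default_factory=datetime.now)

audit_trail = [
    ContactEvent("R-100", "analyst1", "phone", "clarified R&D definition"),
    EditAction("R-100", "total_rd", None, 420.0, "prior-year imputation"),
]
for entry in audit_trail:
    print(entry)
```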

Cognitive Research into Survey and Questionnaire Design Issues

This research agenda pertains to all of the NSF surveys, particularly those collected in a web-based mode. NSF would be well advised to continue to obtain expert advice to develop a multiyear research and implementation agenda for improving questionnaire content, layout, and processes. The research agenda should focus on several issues highlighted in this interim report, including clarifying confusing language and integrating instructions into the questionnaire instrument.

CONCLUSION

Although the focus of this interim report has been on issues of the reliability and accuracy of the statistical methodology, the panel recognizes that its study of these matters is incomplete without reference to several other sources of error and other shortcomings in the surveys. The issues of concept and definition, of timeliness, and of survey management have much to do with the overall quality of these surveys.

Likewise, the challenges posed by a changing environment for data collection, due to the growing prominence of “novel” forms of organizational arrangements for the conduct of research and development, need to be explored, as does the impact of the increasing globalization of R&D. Sectoral shifts in the focus of R&D and the influence of firm size also pose new challenges and opportunities for data collection. These issues and more will be addressed in the final report of the panel.
