rigorous standards of definitions for these terms. NCSES’s role in the working group of National Experts on Science and Technology Indicators (NESTI) of the Organisation for Economic Co-operation and Development (OECD) gives the agency a good opportunity over time to establish clearer definitions as revisions are made to the Frascati Manual on R&D (Organisation for Economic Co-operation and Development, 2002), the Oslo Manual on innovation (Organisation for Economic Co-operation and Development-Eurostat, 2005), and, possibly, the Canberra Manual on human resources (Organisation for Economic Co-operation and Development, 1995). Although the OECD has no mandate to review and revise the Community Innovation Survey (CIS), it has received recommendations on how to better design innovation questions, and these will have implications for the CIS, BRDIS, and other innovation surveys internationally. Several factors undermine the comparability of U.S. and European data on the innovativeness of firms: the framing effect of embedding innovation questions in a lengthy R&D survey, sampling errors, and weighting issues, to name a few. NCSES and the OECD are actively collecting evidence to assess which factors may drive biases in international comparisons.

Since 2008, the National Science Foundation (NSF) has collected data on product and process innovation through the Business Research and Development and Innovation Survey (BRDIS). NCSES augmented its R&D survey to measure innovation activities, allowing for comparisons of innovation statistics across several countries.1 Although the 2008 BRDIS was a pilot survey, it did yield some data on the incidence of product and process innovation among firms by sector (including services), size class, and whether or not respondents reported R&D activity.2 The innovation questions were augmented in the 2009 and 2010 versions of the survey; the 2011 version is currently under development.3 NCSES endeavors to gather more information on innovation activities, going beyond simple “yes/no” questions on whether a firm introduced new or significantly improved goods, services, or processes. These efforts have also included attempts to develop greater comparability with key questions in the CIS and to ensure that the innovation questions are answered by firms that do not perform R&D. Comparability of the BRDIS and CIS data also depends on surveying similar populations of firms and on the techniques used to derive estimates from the data. BRDIS is still a work in progress: complete cognitive testing of the innovation questions remains to be done in both the United States and Europe. Nevertheless, the data are useful for preliminary snapshots.


One impediment to understanding and assessing the country’s innovation is the lack of comparability of U.S. STI indicators with those developed by other countries, both within and outside the OECD. NCSES should develop more useful indicators of innovation, an outcome measure. The biennial publication of the National Science Board, Science and Engineering Indicators (SEI), now includes information on a range of factors, including science, technology, engineering, and mathematics (STEM) graduates, R&D, and patents, but these are intermediate inputs into or proxies for innovation. They are not indicators of innovation itself, which is the


1It is widely known that the innovation statistics from BRDIS and the CIS lack comparability; see Hall (2011, fn. 4) for an explanation.

2See InfoBrief 11-300 (October 2010), available: http://www.nsf.gov/statistics/infbrief/nsf11300/ [December 2011]. The data are based on the 2008 BRDIS, which was launched in January 2009.

3Tabulations of the 2009 data are due to be released in mid-2012, with an InfoBrief on the 2009 BRDIS data scheduled for release in December 2011. The 2010 BRDIS data were still being collected in November 2011.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.