Because S&T includes but is not limited to R&D,8 the focus of this chapter is on indicators of foreign direct investment in R&D and trade in knowledge-intensive services. Measurement of intangible assets is also touched upon, although the panel does not view the development of such measures as more appropriate for NCSES than for the Bureau of Economic Analysis.
Comparability is a universal challenge for statistics and for indicators based on those statistics. The comparability of data can be affected by the survey techniques used to collect the data and the conversion of the data into statistics through the use of weighting schemes and aggregation techniques. These problems are amplified when statistics are used to create indicators, as the indicators may be a combination of statistics (e.g., an average, a sum, or a ratio) with different comparability problems. In addition to the international or geographic comparison of indicators that describe an aspect of a system (e.g., R&D as a percentage of gross domestic product [GDP]), there are problems with intertemporal and intersectoral comparisons. Users of indicators need to recognize that all statistics and indicators have a margin of error beyond which they should not be pushed. The problem is growing as response rates to official surveys continue to decline.
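To make the margin-of-error point concrete, the sketch below uses entirely hypothetical figures (not actual statistics) to show how uncertainty compounds when two statistics are combined into a ratio indicator such as R&D as a percentage of GDP. It assumes independent errors, under which the relative error of a ratio is roughly the root sum of squares of the components' relative errors:

```python
import math

# Hypothetical figures only: gross expenditure on R&D (GERD) and GDP,
# each carrying a relative standard error from its underlying survey
# or estimation procedure.
gerd, gerd_rse = 500.0, 0.04   # billions, 4% relative standard error
gdp, gdp_rse = 18_000.0, 0.01  # billions, 1% relative standard error

# The indicator is a ratio of two statistics.
intensity = gerd / gdp

# Assuming independent errors, the relative error of a ratio is
# approximately the root sum of squares of the components' errors.
intensity_rse = math.sqrt(gerd_rse**2 + gdp_rse**2)

print(f"R&D intensity: {intensity:.2%} +/- {intensity * intensity_rse:.2%}")
# prints "R&D intensity: 2.78% +/- 0.11%"
```

The combined relative error (about 4.1 percent here) exceeds that of either input, which is the sense in which comparability problems are "amplified" when statistics are assembled into indicators.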
International comparisons entail fundamental issues such as language (e.g., the Japanese term for “innovation” is actually closer to what most Americans think of as “technology”), and NCSES is to be congratulated for supporting a project with OECD and the European Union (EU) on the cognitive testing of survey questions in multiple languages. Differences in institutions (e.g., the accounting for the European Union Framework program across EU member states) pose problems, as do cultural differences (e.g., the Nordic countries have access to “cradle to grave” linked microdata on individuals) and differences in governance structures (e.g., the importance of subnational R&D programs in some countries). These differences can limit comparability and increase the margin of error that should be applied to international comparisons of statistics and indicators.
In the area of S&T indicators, a number of key comparability problems are well known. OECD compiles S&T statistics, monitors the methodology used to produce them, and publishes international comparisons; it has documented the problems summarized below.
Research and Development9
The quality of each country’s R&D data depends on the coverage of its national R&D surveys across sectors and industries. In addition, surveys cover firms and organizations of different sizes, and national classifications of firm size differ. Nor do countries necessarily use the same sampling and estimation methods. Because R&D typically is concentrated in a few large organizations in a few industries, R&D surveys use various techniques to maintain up-to-date registers of known performers. Analysts have developed ways to avoid double counting of R&D by performers and by companies that contract with those firms or fund R&D activities of third parties, but these techniques are not standardized across nations.
R&D expenditure data for the United States are somewhat underestimated for a number of reasons:
Allocation of R&D by sector poses another challenge to the comparability of data across nations. Using an industry-based definition, the distinction between market and public services is an approximate one. In OECD countries, private education and health services are available to varying degrees, while some transport and postal services remain in the public realm. Allocating R&D by industry presents a challenge as well. Some countries adopt a “principal activity” approach, whereby a firm’s R&D expenditures are assigned to that firm’s principal industrial activity code. Other countries collect information on R&D by “product field,” so the R&D is assigned to the industries of final use, allowing reporting companies to break expenditures down across product fields when more than one applies. Many countries follow a combination of these approaches, as product breakdowns often are not required in short-form surveys.
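The contrast between the two allocation approaches can be sketched with toy data. The firms, industry codes, and dollar amounts below are hypothetical; the point is only that the same reported expenditures yield different industry totals depending on whether all of a firm's R&D is assigned to its principal activity code or distributed across the product fields it reports:

```python
from collections import defaultdict

# Hypothetical reporting firms: each has a principal industry code and
# a breakdown of R&D spending by product field ($ millions).
firms = [
    {"principal_industry": "pharma",
     "rd_by_product_field": {"pharma": 80.0, "software": 20.0}},
    {"principal_industry": "autos",
     "rd_by_product_field": {"autos": 50.0, "software": 30.0}},
]

def principal_activity_totals(firms):
    """Assign each firm's entire R&D to its principal industry code."""
    totals = defaultdict(float)
    for f in firms:
        totals[f["principal_industry"]] += sum(f["rd_by_product_field"].values())
    return dict(totals)

def product_field_totals(firms):
    """Distribute each firm's R&D across the product fields it reports."""
    totals = defaultdict(float)
    for f in firms:
        for field, amount in f["rd_by_product_field"].items():
            totals[field] += amount
    return dict(totals)

print(principal_activity_totals(firms))
# prints "{'pharma': 100.0, 'autos': 80.0}"
print(product_field_totals(firms))
# prints "{'pharma': 80.0, 'software': 50.0, 'autos': 50.0}"
```

Under the principal activity approach, the software R&D disappears into the pharmaceutical and automotive totals; under the product field approach, it surfaces as its own category. Countries mixing the two approaches therefore report industry distributions that are not strictly comparable.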
definition of S&T are “scientific and technological services” and “scientific and technological education and training,” the definitions of which are found in United Nations Educational, Scientific and Cultural Organization (1978).
8The OECD Frascati Manual (OECD, 2002, p. 19) notes that “R&D (defined similarly by UNESCO and the OECD) is thus to be distinguished from both STET [scientific and technological education and training] and STS [scientific and technological services].” The Frascati definition of R&D includes basic research, applied research, and experimental development, as is clear from NCSES’s presentation of the definition in the BRDIS for use by its respondents.
9This description draws heavily on OECD (2009, 2011) and Main Science and Technology Indicators (MSTI) (OECD, 2012b).
10NCSES reports state R&D figures separately.
11In general, OECD’s reporting of R&D covers both the natural sciences (including agricultural and medical sciences) and engineering and the social sciences and humanities. A large number of countries collect data on R&D activities in the business enterprise sector for the natural sciences and engineering only. NCSES does report data on social science R&D.