Committee members reviewed each country’s S&T plans alongside its observed S&T progress and common indicators to identify the technology areas that could have the greatest impact. These are addressed in the country-specific sections in Chapters 3 through 8. Where possible, the committee tried to assess whether there are better indicators of progress for each individual country and whether each country has the resources to make significant scientific or economic advances in the identified key technology areas. A consequence of the focus on country-specific S&T strategies, rather than on particular areas of S&T, is that coverage of important S&T developments in this report is uneven across the countries.

METHOD OF INFORMATION GATHERING AND EVALUATION

Information was gathered by reading the available S&T plans of the countries of interest, listening to experts’ presentations on S&T focus areas in the six countries and the United States, and reading other publicly available documents related to the S&T enterprises of the six countries. The committee reviewed rankings, indices, and trends compiled by other organizations, covering items such as patents filed, journal articles published, degrees earned in S&T fields, and funding devoted to research and infrastructure. The committee also reviewed the countries’ stated S&T policies, as published in their S&T plans, and, where possible, their S&T spending. In all cases, the funding figures primarily reflected nonmilitary R&D spending. References are included in the applicable country sections.

The committee focused not only on gathering information but also on assessing and interpreting it to understand how S&T are being developed in other countries and what existing or emerging technologies may pose threats to U.S. security. To evaluate the information collected, the committee members relied on their own expertise, experience, judgment, and personal conversations with experts.

One evaluation method was to compare knowledge of a country’s S&T plans with information regarding its S&T spending. Investment in R&D as a percentage of gross domestic product (GDP) can indicate the level of national commitment to S&T development. If the spending for an area is not sufficient to support stable R&D (or progress in that area), then that area may actually be a low priority for the country, even if its stated policy suggests otherwise. Although useful, such measures offer an incomplete portrait of available resources: they do not reflect cost differences between local economies, the impact of technology transfer, or the level of innovation in a given country. An effort was made to identify indicators that capture the major S&T-related trends in each country. Commonly used indicators include metrics such as patents, publications, degrees, and spending. However, interpreting these data in the context of a country’s unique and evolving circumstances presents a considerable challenge. Traditional academic and economic measures may not be reliable in some cases because a country’s standards or its level of involvement in the global economy or the Western academic establishment may differ. For example, academic advancement in China is linked to the number of papers published, incentivizing scientists to produce many lower-quality papers that are not necessarily indicative of more (or higher-quality) research. One potential solution to this problem is to focus on communities of researchers, as revealed by co-authorship, rather than on counts of individual papers (Klavans and Boyack, 2009).
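To make the co-authorship idea concrete, the sketch below builds a small collaboration graph and clusters it into communities of researchers. It is only an illustration of the general approach, not the specific method of Klavans and Boyack (2009): the paper records are hypothetical, and the modularity-based clustering is just one of several algorithms that could be used.

```python
# Minimal sketch: contrast a raw paper count with co-authorship
# communities. The paper records are hypothetical placeholders.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

papers = [
    {"id": "p1", "authors": ["A", "B", "C"]},
    {"id": "p2", "authors": ["A", "B"]},
    {"id": "p3", "authors": ["D", "E"]},
    {"id": "p4", "authors": ["E", "F"]},
]

# Build a co-authorship graph: nodes are authors, and an edge links
# two authors who have written at least one paper together.
G = nx.Graph()
for paper in papers:
    for a, b in combinations(paper["authors"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Raw output metric: total number of papers.
print("papers:", len(papers))

# Community-based view: clusters of collaborating researchers,
# found here by greedy modularity maximization.
communities = greedy_modularity_communities(G, weight="weight")
print("research communities:", [sorted(c) for c in communities])
```

Here the four papers resolve into two collaborating groups ({A, B, C} and {D, E, F}), illustrating how a community-level view can remain stable even when raw paper counts are inflated.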

The World Bank has developed a benchmarking tool called the Knowledge Assessment Methodology (KAM) to help countries identify challenges and opportunities. The KAM rates countries on 83 variables that have been identified as necessary elements of a functioning knowledge economy, ranking each on a scale from 1 (weakest) to 10 (strongest). Figure 2-1 shows the KAM innovation scorecards for the JBRICS countries on several traditionally cited indicators and compares them with those of the United States.
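As a rough illustration of how a rank-based scorecard of this kind can be computed, the sketch below assigns each country a score proportional to the share of countries it outranks on a raw indicator. This is a simplification for demonstration only: the indicator values are invented, ties are not handled, and the authoritative normalization and variable definitions are those documented by the World Bank for the KAM.

```python
# Illustrative KAM-style rank normalization: map raw indicator
# values onto a 0-10 scale by each country's position in the sample.
# The indicator values below are made up for demonstration.
def rank_normalize(values: dict[str, float]) -> dict[str, float]:
    """Score each country as 10 * (countries ranked below it) / (sample size)."""
    n = len(values)
    scores = {}
    for country, value in values.items():
        ranked_below = sum(1 for v in values.values() if v < value)
        scores[country] = 10 * ranked_below / n
    return scores

patents_per_million = {  # hypothetical figures
    "Country A": 120.0,
    "Country B": 45.0,
    "Country C": 310.0,
}
print(rank_normalize(patents_per_million))
```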

Economic measures are designed to capture patterns in the output of goods from material resources. They do not translate well to measuring innovation, which is not easily quantifiable. Unlike in manufacturing, health policy, or education policy, where records can be collected and analyzed, there are no reliable data on the factors that produce innovation or encourage its adoption. Additionally, research on the impacts of organizational structure and decisionmaking is very limited. R&D investment, often used to predict industry output, may be an unreliable indicator: an increase in R&D spending does not always increase output or improve other indicators. For example, between 1992 and 1995, a series of R&D-focused recovery packages in Japan failed to reverse a decline in industrial R&D because of policies that discouraged innovation and university-industry collaboration (OECD, 2009). Sweden, too, has seen little economic growth in recent years despite population growth and heavy investment in R&D (Lane, 2009).


