2

Concepts and Uses of Indicators

 

THE ROLE OF INDICATORS

There are myriad descriptions and definitions of indicators, covering their composition, use, and limitations. The National Center for Science and Engineering Statistics (NCSES) defines an indicator as “a statistical proxy of one or more metrics that allow for an assessment of a condition.”1 Indicators allow one to assess the current status of a project, program, or other activity and how far it is from targets or goals. In many circumstances, an activity is not directly measurable, and indicators therefore provide analytically proximate values that are based on expert judgment.
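As a concrete illustration of an indicator as a proxy tracked against a target, the sketch below computes a familiar input-side indicator, R&D intensity (R&D expenditure as a share of GDP), and measures its distance from a stated goal. All figures, names, and the target value are hypothetical and serve only to illustrate the idea.

```python
# Minimal sketch of an input-side STI indicator: R&D intensity,
# i.e., R&D expenditure as a share of GDP, tracked against a target.
# All figures below are hypothetical and for illustration only.

def rd_intensity(rd_spending, gdp):
    """R&D expenditure as a percentage of GDP."""
    return 100.0 * rd_spending / gdp

def gap_to_target(value, target):
    """Distance between the indicator's current value and a policy target."""
    return target - value

rd_spending = 430.0   # hypothetical national R&D spending, billions
gdp = 15_500.0        # hypothetical GDP, billions
target = 3.0          # hypothetical goal: R&D at 3% of GDP

intensity = rd_intensity(rd_spending, gdp)    # ~2.77% of GDP
shortfall = gap_to_target(intensity, target)  # ~0.23 percentage points
```

The point of the sketch is the proxy relationship: the ratio stands in for the underlying condition (national commitment to research), and the gap to the target is what a policy maker actually reads off the indicator.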

Indicators of science, technology, and innovation (STI) often substitute for direct measures of knowledge creation, invention, innovation, technological diffusion, and science and engineering talent, which would be difficult, if not impossible, to obtain. Techniques are improving for obtaining data that directly measure innovation activities, and these data are already being used to complement indicators that are derived from traditional methods. STI indicators, however, will still have an important role to play in informing policy decisions, especially if they are based on tested analytical frameworks.

USES AND DESIRABLE ATTRIBUTES OF INDICATORS

STI indicators are often used to relate knowledge inputs to outputs, outcomes, or impacts. At a very basic level, knowledge inputs include years of schooling, level of degree, and the amount of training an employee receives on the job. Outputs are specific products, processes, or services. Outcomes and impacts are the near-term and longer-term effects on the economy or society in which the technological ecosystem operates.2 Indicators are relied on both for post-activity evaluations and for analysis prior to an activity, although there are major limitations in using STI indicators for predictive exercises. Foresight is often the best that can be asked of indicators.

____________

1Definition from NCSES (personal communication, 2011). In that communication, NCSES also provided definitions of “data” and “metric”: data are information in raw or unorganized form that represents conditions, ideas, or objects; a metric is a systematic measurement of data.

2For example, scientific advances in detecting and removing pathogenic microorganisms led to technological mechanisms that in turn led to cleaner water, thereby increasing productivity (through a healthier workforce) and hence the inputs to the production of goods and services, as well as the welfare of citizens.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.



A comprehensive review of the use of STI indicators for policy decisions is found in Gault (2010), who outlines four ways that indicators are used for policy purposes: monitoring, benchmarking, evaluating, and “foresighting.”3

At the panel’s workshop, several presenters described attributes that NCSES should keep in mind as it develops new STI indicators. One important desirable attribute that was emphasized is low sensitivity to manipulation. In addition, STI indicators are like baseball statistics: it is unlikely that any single statistic tells the whole story, so users will need to rely on a collection or suite of indicators. At the same time, as Hugo Hollanders of UNU-MERIT4 noted during the workshop, composite indices have both political and media appeal.5 Other ideal characteristics of indicators that workshop participants mentioned included being scientifically derived and evidence based; comparable across regions; powerful for communication; affordable; accessible; scalable; sustainable; and relevant to policy and analysis. STI indicators need to be policy neutral, even though the particular ones selected may reflect the preferences of the stakeholders who request them.

Although the production of indicators across many fields has an established history, there are at least three major cautions regarding their use.

First, indicators can send mixed signals, which require expert judgment to interpret. For example, increased innovation, which is key to advancing living standards, is often considered to enhance job creation, and policy makers discuss spurring innovation as a job-creation tactic. However, innovation can lead to fewer jobs if a new process or managerial expertise increases efficiency. Short-term displacement of workers in one industry or sector can be counterbalanced in the longer term by the development of new products, services, and even sectors, and by increased market demand if process efficiencies drive down prices (see Pianta, 2005; Van Reenen, 1997). One way to guard against mixed signals is to develop STI indicators that support analysis across time scales, sectors, and geographic locations.

Second, a given metric, once it becomes widely used, changes the behavior of the people and practices it attempts to measure. The worst thing a metric can do is not simply deliver a bad (i.e., misleading) answer but incentivize bad practice (see, e.g., West and Bergstrom, 2010). It is important that indicators avoid sending distorted signals to users.

Third, not everything that counts can be counted, and not everything that can be counted counts (an idea attributed to Albert Einstein). It seems clear that some outcome measures that reflect the importance of research and development (R&D) and innovation to society are elusive. For example, social well-being is difficult to measure, yet one of the key interests of policy makers is the return on investment of public funding for science and technology for the good of society.

____________

3For example, at the panel’s workshop Changlin Gao reported that China targets its STI indicators program on four broad uses: (1) monitoring (the international innovation system; linkages within and between national innovation systems; regional innovation systems and industrial clusters; firms; innovation; the implementation of national S&T projects; the selected quantitative indicators in the S&T development goals); (2) evaluating (performance of public investment in S&T; performance of government research institutes and national labs; national S&T programs; specialization of S&T fields; advantages versus disadvantages; new emerging industries, such as information technology, biotechnology, energy, health, and knowledge-based services); (3) benchmarking (international benchmarking; interprovincial benchmarking); and (4) forecasting (the latest data not available in gathered statistics).

4UNU-MERIT, the U.N. University Maastricht Economic and Social Research Institute on Innovation and Technology, is a research and training center of the United Nations University and works in close collaboration with the University of Maastricht.

5To clarify, the panel is not advocating that NCSES develop a “headline indicator.” A suite of key STI indicators should be more informative for users of the statistics.

BEYOND SCORING TO POLICY RELEVANCE

An important aspect of the charge to this panel is the assessment of the utility of STI indicators. Although the National Science Foundation (NSF) does not do policy work, the statistics that NCSES produces are often cited in debates about policies regarding the science and engineering enterprise. For instance, the American Association for the Advancement of Science (AAAS) annually prepares a report giving various breakdowns of R&D expenditures in the federal budget. These data are informed by NSF’s publications, National Patterns of R&D Resources and Federal Funds for Research and Development. In the latest report (American Association for the Advancement of Science, 2011), NSF data are used to show the role of innovation in productivity growth and how innovation affects the quality of life. The Congressional Research Service (CRS)6 regularly refers to the National Science Board’s Science and Engineering Indicators (SEI) biennial volumes (see National Science Board, 2010), which are prepared by NCSES.7 The online version of SEI also has a sizable share of users outside the policy arena and outside the United States. Several highly influential reports each year rely on NCSES indicators to relate scientific inputs to socioeconomic outcomes.

The final report of this panel will contain a comprehensive representation of the policy relevance of STI indicators. In the course of its work to date, the panel queried a variety of users, including policy makers, government and academic administrators, researchers, and corporate decision makers in high-tech manufacturing and service industries. We also sought input from developers of STI indicators and from individuals who are called on by policy makers to assess high-tech sectors in the United States and abroad. This input yielded dozens of questions that STI indicators could address. From the extensive list of questions and issues received, the panel distilled eight key issues that are expected to be prominent in the minds of decision makers for the foreseeable future: growth, productivity, and jobs; STI activities; STI talent; private investment, government investment, and procurement; institutions, networks, and regulations (including intellectual property protection and technology transfer); global STI activities and outcomes; subnational STI activities and outcomes; and systemic changes on the horizon. Box 2-1 shows the questions that flow from these issues. Although the policy relevance of the STI indicators is of primary importance for the panel’s work, the recommendations here and in the final report will address fundamental aspects of monitoring and benchmarking that are of broader interest.

____________

6See National Research Council (2011, p. 86): “In meeting the requirements of Congress for objective and impartial analysis, CRS publishes periodic reports on trends in federal support for R&D, as well as reports on special topics in R&D funding.”

7The National Science Board released the SEI 2012 on January 18, 2012. The chapter topics are unchanged in the new edition.
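Composite indices of the kind Hollanders discussed are typically built by rescaling each sub-indicator to a common range and then combining the rescaled values with weights. The sketch below shows one common construction, min-max rescaling with equal weights. The indicator names, country values, and weighting scheme are hypothetical illustrations of the general technique, not any NCSES or scoreboard methodology.

```python
# Sketch of a composite STI index built from several sub-indicators.
# All country values below are hypothetical; min-max rescaling with
# equal weights mirrors common practice in composite scoreboards.

def min_max_rescale(values):
    """Rescale a list of raw values to the [0, 1] interval."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators, weights=None):
    """Combine sub-indicators (name -> list of per-country values)
    into one weighted composite score per country."""
    names = list(indicators)
    n_countries = len(next(iter(indicators.values())))
    if weights is None:               # default: equal weights
        weights = {name: 1.0 / len(names) for name in names}
    rescaled = {name: min_max_rescale(vals) for name, vals in indicators.items()}
    return [
        sum(weights[name] * rescaled[name][i] for name in names)
        for i in range(n_countries)
    ]

# Hypothetical sub-indicators for three countries (A, B, C):
indicators = {
    "rd_intensity":       [2.8, 1.9, 3.4],    # R&D spending as % of GDP
    "researchers_per_1k": [9.1, 5.2, 10.3],   # researchers per 1,000 workers
    "patent_families":    [48.0, 12.0, 61.0], # per million inhabitants
}

scores = composite_index(indicators)  # one score per country, in [0, 1]
```

The construction also illustrates the caution raised at the workshop: the single headline number hides which sub-indicator drives a country's rank, which is one reason a suite of indicators is more informative than any composite alone.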

BOX 2-1
Key Issues and Questions for STI Indicators

Growth, Productivity, and Jobs

What is the contribution of science, technology, and innovation (STI) activity to productivity, employment, and growth?
What is the relative importance of technological versus nontechnological innovation for economic growth?
Is the United States falling behind with respect to innovation, and what are the effects on socioeconomic outcomes?

STI Activities

What are the drivers of innovation?
How influential is R&D for innovation and growth (by sector)?
What would constitute a “balance” between the biological and physical sciences? On what basis could that be determined? Does biological science depend on physical science for advancement?
How important are the following for advancing innovation: small businesses, large businesses, strategic alliances, technology transfer between universities and firms, academic researchers, government labs and procurement activities, and nonprofit organizations?
What are the emerging innovative sectors, and what is unique about them?

STI Talent

How much knowledge capital does the United States have?
How many people, possessing what kinds of skills, are needed to achieve a robust STI system?
What additional sources of “talent” can best be tapped, especially among immigrants, women, and minorities?
How many science and engineering doctorate holders took nontraditional pathways into the science, technology, engineering, and mathematics (STEM) workforce? Did this vary by race/ethnicity, gender, or the existence of a disability?
How important are community colleges in developing human resources for STEM talent?
Is the United States falling behind in STEM workers?
What fields other than science, technology, engineering, and mathematics are important for advances in STI?

Private Investment, Government Investment, and Procurement

What impact does federal research spending have on innovation and economic health, and over what time frame?
How large should the federal research budget be? How should policy makers decide where to put additional research dollars or reallocate existing funding streams: information and communications technology, biotechnology, physical science, nanotechnology, environmental technology, social science, etc.?
Does government investment crowd out or energize private investment in STI activities?
What is the role of entrepreneurship in driving innovation?

Institutions, Networks, and Regulations

What impacts are federal research programs having on entrepreneurial activities in science and engineering sectors?
Where are the key gaps in the transfer of scientific and technological knowledge that undercut the performance of the STI system?
Where is the supposed “valley of death” in innovation? In which industries is it most prevalent? What part of the process is underfunded for specific sectors?
What are the nature and impact of intellectual property protection on scientific and innovation outputs?

Global STI Activities and Outcomes

What can we learn from other countries, and what are other countries learning from us?
In what technological areas are other countries accelerating?
What impact does the international flow of STI have on U.S. economic performance?
What is the relative cost of innovation inputs in the United States versus other countries?
Where are multinational corporations sourcing R&D?
What institutional differences affect innovation activities among nations, and how are they changing?

Subnational STI Activities and Outcomes

How does innovation activity in a given firm at a given place contribute to that firm’s productivity, employment, and growth, and perhaps also to these characteristics in the surrounding area?
How are innovation supply chains working within a state?
Are firms principally outsourcing new knowledge from customers or from universities?

Systemic Changes on the Horizon

How will demographic shifts affect the STEM workforce, nationally and internationally? Will they shift the locus of the most highly productive regions?
Will global financial crises slow innovation activities or merely shift the locus of activities?
When will emerging economies be integrated into the global ecosystem of innovation, and what impact will that have on the system?
How are public views of science and technology changing over time?