instrumentation. These three surveys were taken in different years and, therefore, provide a blurred rather than a focused snapshot of the distribution of research infrastructure in the behavioral and social sciences. They suggest that the behavioral and social sciences received about 2 percent of the total research expenditures for facilities, equipment, and instrumentation in the early 1990s: see Table 1. It is impossible to say whether that is an appropriate percentage without relating current levels to needs in the behavioral and social sciences and then doing a similar analysis across the other sciences. But it is possible to suggest that survey data that could be aggregated to give more focused details on the facilities, equipment, and instrumentation resources in the behavioral and social sciences would help make future decisions on research infrastructure more informed.

The fragmentation of information on infrastructure investments in the behavioral and social sciences is exacerbated by the lack of any comprehensive data on data infrastructure itself. The biggest data collector in the country is the federal government, because of the critical importance of the public information and data required to organize and manage a modern government, economy, and society. NSF has a small but essential role in developing this behavioral and social science infrastructure by supporting innovative datasets, especially prototypes and new statistical measurement research. NSF also encourages research data collection that other government agencies would not fund: such investment exploits the original federal investment in data and provides intellectual leadership to the federal government and the research community. A systematic collection of data on infrastructure investments in data itself would help NSF/SBE make decisions on allocating its resources for research infrastructure most effectively.
Currently, NSF/SBE's investment in all infrastructure activities represents 15–17 percent of the NSF social, behavioral, and economic research budget (Butz, 1997). The Directorate has publicly asked whether this is the right percentage: many workshop participants felt--and CBASSE members agree--that this is not the right question. There are several necessary prior questions: What is the current infrastructure doing? How are the sciences developing? Where are the research opportunities? How much infrastructure is needed to address the most important scientific questions? This report does not attempt to answer these questions; it does recommend a process by which they can be realistically addressed.

Selection Process for Research Infrastructure Projects

Current Process and Possible Modifications

The current selection process for infrastructure proposals is virtually the same as that for individual investigator proposals at NSF: see Exhibit 1. The key fact is that proposals for research infrastructure compete directly with other proposals for research (on the same basis and at the same time), although there are systematic differences between infrastructure and
TABLE 1 Expenditures for Research Infrastructure for Social Sciences and Psychology as a Percentage of Research Infrastructure Expenditures for All Sciences, circa mid-1990s

Category of Infrastructure                                      Social Sciences   Psychology
Public and private research and development
  expenditures at academic institutions (1994)a                 5.4%              2.0%
Research facilities expenditures (1996–1997)b                   1.8%              1.2%
Scientific equipment expenditures (1994)a                       2.5%              1.5%
Instrumentation expenditures (1993)c                            *                 *
Total, circa 1993–1997d                                         1.4%              1.0%

* Too small to be listed.
a Data from National Science Foundation/SRS, Survey of Scientific and Engineering Expenditures at Universities and Colleges: 1994.
b Data from National Science Foundation/SRS, Survey of Scientific and Engineering Research Facilities at Universities and Colleges: 1996.
c Data from National Science Foundation/SRS, Survey of Academic Research Instruments and Instrumentation Needs: 1993.
d This is the percentage of total expenditures for facilities, equipment, and instrumentation, as reported in the above surveys, that is spent on the social sciences and psychology.
Exhibit 1 Current Review Process for Infrastructure Proposals (and Investigator Proposals) in the Behavioral and Social Sciences at the National Science Foundation

Timing: Proposals are reviewed twice a year, at the same time as investigator proposals.

Criteria: (1) What is the intellectual merit of the proposed activity? (2) What are the broader impacts of the proposed activity?

Reviewers: Independent external reviewers submit written reviews. Two panel members also prepare independent reviews. Advisory panels discuss the independent reviews and write a panel summary.

Priority Recommendation: Advisory panel members base priority rankings on the written reviews and panel discussions, with varying kinds of possible summary ratings depending on the panel.

Funding Recommendation: Funding is generally based on the NSF program director's judgment, informed by the panel and outside reviews.

Duration: 1–5 years, with most grants for 3 years or under.

Page Requirements: 15 pages
investigator proposals. Those differences include three major factors:

-- The scientific outcomes from investments in behavioral and social science infrastructure create knowledge, data, and methods that can be publicly shared at little cost among many researchers. This distinction applies both to the generation of very large sets of cross-sectional and longitudinal data and to the qualitative advances in knowledge generated in the collaborative and interdisciplinary settings provided by centers, institutes, and scientific conferences.

-- Proposals for infrastructure facilities and instrumentation are usually more costly to funders than grants to individual researchers pursuing discrete projects. (If the former were calculated on a per-user basis, however, they might be a bargain.) Accordingly, the necessary collective resources for infrastructure are more difficult to mobilize than those for individual research grants.

-- Infrastructure investments typically require a longer time to reach full productivity than investments in discrete individual-investigator research proposals.

These differences helped frame the workshop discussion and CBASSE's conclusion that the differences between infrastructure investments and individual research grants should be reflected in a revised selection process for behavioral and social science research infrastructure. Although infrastructure proposals are qualitatively different from investigator research proposals, they are also complementary: many individual proposals hinge on past and current infrastructure investments, which makes the selection process crucial for future research.

Workshop participants discussed the current process of funding research infrastructure and various possible modifications to it. One approach is simply to continue the current situation, in which proposals for infrastructure compete directly with individual research grants.
This inevitably raises the question: “How many individual research projects could we support if we did not support this infrastructure proposal?” This approach involves comparability difficulties because it does not recognize the differences between individual research and infrastructure activities outlined above.

A second approach would have proposals for infrastructure compete among a number of existing infrastructure activities. This would not have the comparability problems between infrastructure and individual research and might seem appropriate if a ceiling on infrastructure expenditures existed. But it would create other comparability problems because of the different dimensions of infrastructure, such as new data surveys and new research centers.

A third approach somewhat expands the second one: proposals for infrastructure would involve a competition among existing and proposed infrastructure activities that address similar purposes. This approach might seem appropriate if there is a ceiling on infrastructure
expenditures for specific purposes. But it would generate other comparability issues, for example, between existing infrastructure with considerable past investments and track records and new proposals with no historical experience.

Recommended Selection Process

In the opinion of CBASSE members, behavioral and social science research infrastructure is too important to the long-term development of the sciences to continue to be funded under the current process used for individual research grants. Rather, a separate selection process for infrastructure proposals should be developed, one that is consistent with the characteristics of research infrastructure. The development of a new process would include modifications to the current funding cycles, project duration, criteria for evaluation, and selection criteria. CBASSE offers these recommended changes to the selection process for NSF's Directorate for Social, Behavioral, and Economic Sciences.

Funding Cycles

Workshop participants discussed the strengths and weaknesses of making decisions about infrastructure investments serially (see Suzman, 1997). Currently, individual infrastructure projects have different cycles for review, renewal, or termination, depending on when each project was first funded. Changing the current system so that all infrastructure is recompeted at the same time would create an unnecessary and inefficient hiatus in research and might overload the research management or fiscal structure. Instead, there would be different timing for directly competing activities and for new activities. If some researchers or institutions want to compete explicitly with an existing infrastructure project, they would need to submit a competing proposal at the time of renewal for the existing project. New proposals that are not substantively competitive with current projects would be submitted in any funding cycle.
Project Duration

The current duration for individual and infrastructure research grants in NSF/SBE is 3–5 years. That span is usually not long enough to evaluate the long-term productivity of many infrastructure projects, so grants for infrastructure should be lengthened to reflect the time it takes for them to become effective investments. And since the appropriate duration of an infrastructure grant is likely to vary, no single time frame should be applied to all proposals: part of the grant application process should include the applicant's proposal for an appropriate duration for the grant and schedules for interim reviews. Reviewers of the proposal would be asked to suggest an appropriate duration to the funding agency, which would make the final decision.
Evaluation Criteria

Currently, the implicit evaluation criterion for an existing infrastructure project is whether the project is actually functioning and being used. But more detailed criteria would give both funders and grantees more specific goals for the investment. Thus, CBASSE recommends that grant applicants be required to propose criteria for both interim and final evaluations; reviewers should also be asked to suggest criteria. NSF/SBE should adopt evaluation criteria at the beginning of every infrastructure project to help guide a thoughtful evaluation at the end.

Selection Criteria

    Presently there is no widely accepted way for the Federal government in conjunction with the scientific community to make priority decisions about the allocation of resources in and across scientific disciplines…. Whenever there is some amount of comprehensive coordination and decision-making, it is supremely important that the criteria of choice be appropriate. There is no virtue in doing the wrong thing efficiently. Any scheme of oversight must begin with explicit discussion of and agreement about the goals to be achieved. (National Science Board, 1998:1,7)

Workshop participants discussed in some detail the need to have infrastructure proposals judged on criteria that are appropriate for each infrastructure project. They also discussed illustrative criteria for selecting infrastructure projects. In general, the participants agreed that each proposal for scientific infrastructure needs to address long-term strategic scientific questions and be able to meet the technical requirements to answer those questions. Both aspects, moreover, should be outstanding to merit funding. The criteria for assessing a proposal should, therefore, include both strategic and technical criteria.
For evaluation of the strategic aspects of infrastructure projects, illustrative criteria might include (Levine, 1997:5):

-- scientific justification: How likely is it that the data will stimulate research leading to important discoveries … extending to other fields? How likely is it that the data will stimulate research leading to significant improvements or innovations in investigative methods?

-- general utility: Will the knowledge forthcoming from research utilizing these data justify the large investment in collecting them?

-- demand: What is the probability that the project will result in research
proposals to NSF? To NIH or other agencies?

For evaluation of the technical aspects of research infrastructure projects, illustrative criteria might include:

-- the availability of the infrastructure (whether center, equipment, or electronic network) to individual researchers in multiple disciplines and the expected rate of use of the infrastructure;

-- the ability of the proposing group to administer the proposed infrastructure activity responsibly, efficiently, and effectively; and

-- the relationship of the proposed project's cost to the importance of the questions to be addressed and to the costs of other infrastructure projects.

Each kind of infrastructure project in the behavioral and social sciences should also meet specific criteria. For example, a necessary condition for proposals for data infrastructure would be that all data be clearly documented and made publicly available. Other specific questions for data infrastructure proposals could follow guidelines previously recommended by the NRC (National Research Council, 1996):

-- Are the survey measurement methods relevant, valid, advanced, and novel?

-- Do the data fill a niche that is unfilled by other datasets?

-- Will the data be linked or integrated with other data sources?

Because infrastructure organizations typically involve a social structure (laboratory, center, organized research unit), the leadership, division of labor among investigators, pattern of reporting, effectiveness of support staff, and capacity for communication with other organizations should be evaluated, possibly by site visits. This dimension of institutional effectiveness has been used in evaluating different kinds of national laboratories, and there is good reason to believe that such questions can be valuable in evaluating other infrastructure organizations as well.
Other specific criteria for evaluating organizational infrastructure projects, such as research centers, might include:

-- scientific excellence and promise, which involves primarily, but not exclusively, the quality of the behavioral and social science personnel involved and the quality of a steering committee to oversee the major projects;