MANAGING THE SPACE SCIENCES

APPENDIX F
Science Management in Other Institutions

NATIONAL SCIENCE FOUNDATION

The National Science Foundation (NSF) has three general types of research programs: support for individual investigators or small groups (the dominant type); support for large groups, field operations, and centers (such as Engineering Research Centers); and support for national user facilities (such as telescopes, oceanographic research vessels, and particle accelerators). The process NSF uses to select awardees is fundamentally the same for all three modalities, with modifications as appropriate.

The type of proposals NSF receives varies greatly by field of research or education. In the research programs, NSF receives mostly solicited proposals in response to program announcements, similar to NASA Research Announcements (for laboratory research) and NASA Announcements of Opportunity (for flight missions). In these programs, proposals tend to differ greatly from each other. In the case of centers or user facilities and in Education and Human Resources, the solicitations are more targeted, so the proposals tend to look fairly similar.

Once received by NSF, proposals are assigned to the appropriate program for review. Each program is headed by a program officer. Depending on proposal volume and complexity, a program may have one, two, or even three program officers. At least one of these individuals is usually a “rotator,” a scientist or engineer from the relevant disciplinary community (usually from a university) who is serving a 1- to 3-year term at NSF. At any one time, about 40 percent of NSF program officers and division directors (a division is a collection of related programs) are rotators. One of the most important responsibilities of the program officer is to select knowledgeable reviewers for each proposal.
Reviewers are also scientists and engineers from outside NSF who have expertise in the area of the proposal or in a related area (in the case of multidisciplinary proposals). Anywhere from 6 to 10 reviewers are selected, with a minimum of 3 reviews needed to process a proposal further. Proposals average 5.5 reviews. This “peer review” is the cornerstone of NSF's proposal review process.

Reviews are done in writing, and a numerical (1 to 5) or verbal (“excellent,” “very good,” “good,” “fair,” etc.) rating is assigned by each reviewer. In some programs, program officers use only reviews by mail (or e-mail). In other programs, mail reviews are solicited, and then a standing panel of different experts is convened on a regular basis to consider and rate the proposals yet again. The reviewers consider scientific merit and the capacity of the investigator (track record and/or potential) as the primary, but not exclusive, criteria for evaluation. In the case of centers or large user facilities, program officers may stage a site visit by a team of external reviewers to each candidate on a “short list” of competitors. The site visit is generally used to validate and assess more fully the plans and capabilities of the proposers. Site visits are usually reserved for the largest, most complex proposals.
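The triage arithmetic described above (a minimum of three reviews, ratings averaged on a 1-to-5 scale) can be sketched in a few lines of Python. The sketch is purely illustrative and is not NSF software; in particular, the mapping of verbal ratings to numbers, and the convention that higher is better, are assumptions made for this example.

```python
# Illustrative sketch of the review-triage arithmetic described above.
# The verbal-to-numerical mapping (and the convention that higher is
# better) are assumptions for this example, not NSF's actual scale.
MIN_REVIEWS = 3  # minimum number of reviews needed to process a proposal

VERBAL_SCALE = {"excellent": 5, "very good": 4, "good": 3, "fair": 2, "poor": 1}

def average_rating(reviews):
    """Average a mix of numerical (1-5) and verbal ratings.

    Returns None when fewer than MIN_REVIEWS reviews have been received,
    meaning the proposal cannot yet be processed further.
    """
    scores = [VERBAL_SCALE[r] if isinstance(r, str) else r for r in reviews]
    if len(scores) < MIN_REVIEWS:
        return None
    return sum(scores) / len(scores)

print(average_rating(["excellent", 4, "good"]))  # 4.0
print(average_rating([5, "very good"]))          # None: only 2 reviews in hand
```

On NSF's forms the numerical and verbal scales are alternatives rather than interchangeable; collapsing them into one dictionary here is only a convenience for the sketch.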
The program officers do not formally review proposals, nor are they voting members of review panels. Reviews and reviewers' comments are advisory to the program officer, not controlling. The reviews are given great weight, but, by design, the program officer is charged with arriving at an independent recommendation based on his or her own analysis of the merits of the proposal, taking into account broader NSF objectives. NSF program officers have considerable flexibility in making award decisions, but that flexibility is used most often “at the margin,” that is, among proposals ranked in the middle of the averaged ratings. Program officers also work hard to “stretch” their available resources by negotiating award levels for individual proposals.

The written mail reviews and summaries of the panel meetings are provided to investigators after the award decision. Investigators use this information to respond to criticisms or weak points when rewriting and resubmitting proposals. Even successful awardees find the comments useful. Program officers spend countless hours talking and visiting with applicants at every stage of the process, particularly counseling unsuccessful applicants. There is also a well-established process whereby unsuccessful investigators may ask in writing for reconsideration of their proposal by a higher NSF official if they believe they did not receive a fair evaluation.

Oversight of this process is accomplished in several ways. On a day-to-day level, no award action may be taken without the concurrence of at least one supervisor at least one level higher. Perhaps more important, scientific approval is separate from “business” approval. The program officer cannot actually commit money; this can be done only by a grant or contract officer (a nonscientist with expertise in business and financial matters) in a separate part of NSF.
Thus, scientific approval is necessary but not sufficient for the actual award to occur. In the end, however, the most effective controls are the program officer's own integrity and the openness of the entire process to external scrutiny. In this rather loose management milieu, NSF program officers are much more independent than their counterparts at the National Institutes of Health, for example, but do not have as strong a role in decision making as program officers in U.S. military agencies.

In recent years, especially in the education programs and others where large numbers of proposals are anticipated, NSF has instituted a two-tiered proposal process. The first tier consists of a relatively short “preproposal,” which is evaluated by a small number of reviewers selected by the program officer to determine whether it is in the competitive range. Investigators with successful preproposals are permitted to continue to the second tier, a full proposal, which is then evaluated as above. Another strategy NSF uses is the “planning grant.” Planning grants are stand-alone competitions for modest sums (usually a total of $50,000 or less for 1 to 2 years) that allow investigators to conduct preliminary studies or build a proposal team in advance of submitting a full proposal for a much greater level of funding. The value of these approaches is evident: proposals not in the competitive range can be weeded out without expending the time and resources (on both the investigator's and NSF's part) that the preparation and review of a full proposal entails. In addition, for the last 3 years, investigators have had the option of applying specifically for a Small Grant for Exploratory Research (SGER). These are nonrenewable, 1- to 2-year awards of $50,000 or less for small-scale, high-risk research.
The criteria cited most often for approval were “untested or novel idea” or “severe time urgency with respect to collection of data.” The proposals are very short, and the program officers do not seek external reviews. This program was instituted to counter criticisms of the traditional process as being too “picky,” risk-averse, and conservative. On average, program officers employ 1 to 3 percent of their budgets for SGER proposals (the upper limit is 5 percent).

NSF has two basic types of awards: standard grants and continuing grants. Standard grants are generally used for individuals and small groups. They may be made for multiple years (usually 2), but the total funding is essentially approved and committed at the beginning of the award. Continuing grants are also made for multiple years (3 years or longer), but the funding is approved and committed annually and is subject to satisfactory progress toward the stated goals of the project. For example, in the case of Engineering Research Centers, initial awards are made for 5 years, with annual reviews and a major review after 3 years that determines whether the award will be renewed noncompetitively for an additional 3-year period.

Unlike virtually any other agency, NSF conducts no research or development and neither constructs nor operates any research facilities itself, that is, with NSF employees. NSF does own a number of major research facilities, but it operates them by using contractors, which are almost exclusively universities or university consortia. NSF monitors and often manages the processes used by these contractors to provide access to these facilities to the broader scientific and engineering communities. The performance of these contractors is periodically reassessed, and the operations contracts are periodically recompeted. NSF's approach to these contracts is performance-based; that is, goals, objectives, and performance are specified in the proposal solicitation, and the detailed methodology is up to the proposers. The “science effectiveness,” or scientific benefit per dollar, of the proposer's approach is an important criterion for evaluation.

As noted above, NSF relies heavily on the participation of external scientists and engineers in the proposal review process. It is worth noting the involvement of the external scientific community in NSF priority setting and program planning. Program officers, especially rotators, play a significant role in identifying areas of scientific opportunity (and, therefore, areas ripe for additional funding) in their respective disciplines. The rotator system was specifically designed to accomplish this. All Directorates, which are groups of programs covering broad areas of science, engineering, and education, have advisory committees of external scientists and engineers.
These advisory committees assist the Directorate leadership in setting overall programmatic priorities and in reviewing the balance among the three modalities of support. The advisory committees are also playing an increased role in the evaluation of NSF programs, a key area of emphasis in recent years. At the highest level, the National Science Board (NSB), a presidentially appointed group, meets several times a year to approve NSF budget requests to the President, all major policies and new programs, and major (multimillion-dollar) awards. The NSB also works with the NSF Director to set the overall strategic direction of the agency. NSF is the only federal agency governed by a board in the manner of a private grant-making organization.

NATIONAL INSTITUTES OF HEALTH

The National Institutes of Health (NIH) is the largest single supporter of biomedical research in the world. Its mission is to improve the health of the people of the United States and other nations through the pursuit of fundamental knowledge about the behavior and nature of living systems and the application of that knowledge to extend healthy life and reduce the burdens of illness and disability. The major avenue through which NIH pursues this mission is the support of more than 50,000 scientists working at 1,700 institutions in this country and abroad.

The major structural units of NIH are its 17 national institutes, each with its own authorizing legislation and appropriation. They are referred to as “categorical institutes” because most are responsible for specific categories of disease. While there are some internal variations among the categorical institutes, there is a typical organizational pattern. Most have an “extramural program” component and an “intramural program” component, each reporting to the Institute Director.
The extramural programs provide support to the external community through awards for research project grants, training grants and fellowships, specialized centers and program projects, construction and resource grants, special programs for the development and support of minority scientists, research and development contracts, and cooperative agreements. The latter two instruments permit the agency to establish the plans, parameters, and detailed requirements for the projects it wishes to support. In all programs, permanent extramural staff are responsible for the oversight and management of scientific programs in each institute. NIH relies heavily on the national pool of scientists actively engaged in research to provide advice on research directions and opportunities, to evaluate the scientific and technical merit of proposed research projects, and to select those meritorious projects that are likely to advance the goals of the institutes. This evaluation and selection process is achieved through the “peer review system.” Since extramural awards constitute about 84 percent of the NIH budget, which in turn accounts for about one-third of the nation's investment in biomedical and behavioral research, the peer review system has a large impact in shaping scientific programs and progress for NIH and the nation.

The NIH peer review system is based on two sequential levels of review. The first level involves the evaluation of the scientific and technical merit of applications by knowledgeable experts who are, for the most part, nonfederal scientists. They are aggregated into legislatively mandated initial review groups (IRGs) established according to scientific discipline or current research area. Most IRGs, or study sections, are managed by the Division of Research Grants (DRG), reflecting a long-standing policy of organizational separation between the awarding institute and the initial review process. Each study section is managed by a scientific review administrator (SRA) and meets three times each year to discuss and evaluate the applications assigned to it. All members of a study section are sent applications by the SRA well in advance of each meeting and are expected to read and become familiar with each application. Two or more members are assigned to each application and prepare independent, detailed reviews based on well-established criteria. Based on these evaluations, the study section may decline to recommend the application for further consideration, defer it for additional information, or approve it for the time and amount requested or with appropriate reductions.
Approved applications are assigned a scientific merit rating ranging from 1.0 (outstanding) to 5.0 (acceptable). Following the meeting of an IRG, the SRA prepares a summary statement for each application. These summaries describe the proposed research, the reasons for the recommendations, and priority scores for applications that are scored. The statements are forwarded to the appropriate institute or center for the next level of review. The summary statement is automatically sent to the principal investigator of each application.

The second level of review is conducted by National Advisory Councils composed of both scientists and members of the general public. They not only assess the scientific merit of the applications but also consider the relevance of each proposal to the programs of the institute or center. Extramural program staff attend study section meetings as program resource persons and present the applications for which they are responsible to their institute's Advisory Council. SRAs attend Council meetings to provide information in support of the recommendations of the IRG. Council members review the summary statements received in advance of the three meetings held each year and, in consideration of both the recommendation of the IRG and the programmatic priorities of the institute, may accept or modify the recommendation of the IRG. When there is disagreement with the IRG recommendation, the Council may recommend that the application be returned for additional review by the same or another IRG. If upon re-review the IRG recommendation remains the same, the action of the IRG stands. The NIH Councils are unique in that, by legislative mandate, an award of a grant can be made only if approved by a Council. This feature greatly reduces the likelihood that an award will be made on the basis of political or other considerations.
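The two-level gate described above, a merit score from the IRG plus mandatory Council approval, can be expressed as a short decision function. The sketch below is a simplified illustration rather than NIH's actual system: the payline cutoff is a hypothetical parameter invented for the example, while the 1.0-to-5.0 (lower is better) score range and the requirement for Council approval follow the description above.

```python
OUTSTANDING, ACCEPTABLE = 1.0, 5.0  # priority-score endpoints; lower is better

def fundable(priority_score, council_approves, payline=2.0):
    """Sketch of the two-level NIH gate described above.

    `payline` is a hypothetical merit cutoff used only for illustration.
    The second condition is the real, legislatively mandated one: no grant
    can be awarded without Advisory Council approval, so a good priority
    score alone is never sufficient.
    """
    if not OUTSTANDING <= priority_score <= ACCEPTABLE:
        raise ValueError("priority score must lie in [1.0, 5.0]")
    return priority_score <= payline and council_approves

print(fundable(1.4, council_approves=True))   # True
print(fundable(1.4, council_approves=False))  # False: Council gate not cleared
print(fundable(3.1, council_approves=True))   # False: outside the assumed payline
```

The point of the sketch is the conjunction: both levels must concur, which is what insulates award decisions from purely political considerations.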
Among the various instruments of support, research project grants representing investigator-initiated research continue to receive NIH's highest priority. While these grants are not solicited, researchers are informed about the program areas of special interest to an awarding institute through program announcements or a request for applications in specific areas at a specified funding level. For the research grant mechanism, NIH is essentially a patron providing assistance and encouragement. As an indication of the magnitude of support, $7.5 billion was expended through this mechanism in FY 1994, most of it in the research project grant category. These grants support individual projects, each with a senior principal investigator. A total of 24,964 awards were made in this category in FY 1994. Initial review groups may be established by an institute to review certain types of grant applications, such as those for large multifaceted program projects and centers, institutional fellowships, academic awards, conference grants, minority programs, and resource grants, among others. Contract proposals are generally solicited to produce a specific service or product. Institutes also draw from the national pool of scientists in constituting Program Advisory Committees to advise on specific programs and future research needs and opportunities and to identify and evaluate future extramural initiatives.

All of the NIH institutes except one have “intramural research programs.” These, collectively called the NIH Intramural Research Program (IRP), constitute a federal laboratory and represent the largest biomedical research enterprise in the world. They represent, however, only a small fraction of the NIH budget, a little over 11 percent. A unique feature of the IRP is that its scientists do not have to compete for grants in the same manner as extramural scientists. However, there is a parallel, retrospective review of accomplishments conducted by Boards of Scientific Counselors. Each laboratory and its permanent investigators, including junior staff who are not “tenured,” are reviewed at least every 4 years. The results of this review are submitted to the Institute Scientific Director, the Institute Director, and the NIH Deputy Director for Intramural Research for appropriate action. These recommendations are discussed by a Board composed of all Scientific Directors at NIH. In addition to the possibility of long-term and stable research support, intramural scientists have access to the NIH Clinical Center's facilities for patient investigations. The passage of the Federal Technology Transfer Act has resulted in the development of more than 200 Cooperative Research and Development Agreements (CRADAs) between intramural scientists and industry.

Organizationally, the Director of NIH presides over a large and complex organization in which major authority resides with the Institute, Center, and Division (ICD) Directors.
They manage their programs on a day-to-day basis with significant autonomy, but within the framework of legislative mandates, regulatory requirements, corporate policies and procedures developed by NIH, and a very strong and effective advisory system. The NIH Director has his or her own Advisory Committee, which advises the Director, and ultimately the Secretary of Health and Human Services, on policy matters pertinent to the NIH mission. The coordination and management of the extramural and intramural programs of NIH are accomplished both at the level of the NIH Director and at the individual institutes by means of an elaborate but highly effective matrix-type organizational system.

DEPARTMENT OF ENERGY

The Department of Energy (DOE) has very extensive scientific research programs in both basic and applied areas. This research is carried out in DOE's national laboratories as well as in the university community, with the assignment of programs differing from area to area. Most of the basic research efforts make use of large facilities at the national laboratories, such as particle accelerators, nuclear reactors, synchrotron light sources, and electron microscopes. These facilities, and a portion of the research that utilizes them, are located at the national laboratories, but the facilities are operated for the benefit of the user community: scientists at universities, other laboratories, and industry. In general, the laboratories carry out only a minimal research effort in “small science,” which is the province of university scientists. In applied research (energy programs, environmental cleanup, and defense programs), the work is generally carried out at the laboratories, with the laboratory fraction increasing in the order of the list above. Peer review is used extensively in evaluating research in the basic areas, but less so in the applied programs. Budget decisions are made by program officers at DOE headquarters.
DOE differs from NASA in that the national laboratories (the DOE equivalents of the NASA Centers) are all government-owned, contractor-operated (GOCO) facilities like the Jet Propulsion Laboratory. There are no civil service employees at the laboratories. The laboratories that emphasize basic research in their missions are generally operated by universities or by university consortia; others are operated by industrial organizations. This arrangement has been a strength of the DOE research effort and is one of the reasons for the interest shown by other agencies in the GOCO mode of operation.

ADVANCED RESEARCH PROJECTS AGENCY

The Advanced Research Projects Agency (ARPA) in the Department of Defense was chartered in February 1958, largely as a response to the orbiting of Sputnik by the former Soviet Union. This occurred early in the same year in which NASA was chartered by the Space Act of 1958. From the beginning, however, ARPA adopted a set of management principles, an organizational structure, and an approach to formulating and carrying out its research projects and programs quite different from those of NASA. The principal difference in approach was that ARPA chose not to create a large civil service and government facilities infrastructure. Rather, it chose to operate with a small staff of entrepreneurial program managers, executing its programs through existing government and private scientific and engineering resources. ARPA has maintained these initial approaches largely unchanged to this day and has had substantial success in promoting significant research advances in many scientific and engineering disciplines.

These factors motivated the committee to examine the ARPA research management approach and to assess the applicability of its principles to NASA space science management. Explicitly, what are some of the principles and approaches that have apparently served ARPA so well in research management for the past 37 years? How has it stayed in the forefront of U.S. research accomplishments? How has it been able to renew its own internal intellectual capital? How has it been able to accomplish so much with such modest financial resources by comparison, say, with those of the military departments? How has it been able to capture the approbation of the research community, the Congress, industry, U.S. allies and enemies, and even, grudgingly, the military departments it was created to serve?

Important Mission—The mission of DoD to provide for national security is the foremost responsibility of the federal government. Within that context, and by charter, ARPA has the DoD mandate to work at the high-risk, high-payoff frontier of defense research science and engineering. Since risk taking is accepted as part of the culture at ARPA to achieve high payoff, some failure of individual projects to achieve their goals is expected and accepted. In contrast to the military departments, which have traditionally structured their laboratory research programs to make steady, incremental improvements to existing technologies and mission capabilities, ARPA seeks advanced concepts and technologies with the potential to achieve order-of-magnitude performance improvements and increased military capability. Some of its early programs involved missile technology development, space surveillance, ballistic missile defense, high-energy laser weaponry, nuclear weapon detonation detection, advanced digital computation, computer network communications, and many other emerging technologies.

Quality Science and Engineering—ARPA has been able to select projects of national importance with crucial relevance to DoD's mission and with extremely high military impact. Because of their national importance and the frontier science they employ, these projects have attracted the most capable scientists, engineers, and project managers in the nation to participate in their execution. Since ARPA chose not to create and support institutions to pursue its objectives, its policy has been to seek out the most competent investigators to carry out its programs wherever they could be found. Dedication to this principle is probably the single most important factor in the record of success achieved in ARPA programs and in the cost-effectiveness of those achievements.
A typical ARPA project may span a three- to four-year period, after which the project team may be disbanded and ARPA's obligation to support it ceases. Through this mechanism ARPA has avoided the burden and inflexibility of a captive scientific workforce in a dynamic and changing research environment. ARPA has investigators located at universities, defense laboratories, Federally Funded Research and Development Centers, industrial research and development organizations, national laboratories, foreign research establishments, and private research centers, and it even relies on private individuals for some projects.

Quality Research Management Staff—Historically, ARPA policy has been to provide term appointments for its staff members in order to ensure a rapid turnover of people and ideas. The staff is composed of discipline scientists and engineers from the research environments in which ARPA executes its programs, as well as technically trained military officers. A typical term for an ARPA program manager has been 3 years, with 1-year extensions at the director's discretion. Recently, ethics-in-government legislation has made it increasingly difficult to recruit and retire ARPA program managers according to this policy. The motivation for ARPA program managers to serve a term there is threefold. First, there is little bureaucracy in this small organization, and dynamic technical entrepreneurs operate best in such a setting. Second, each program manager has an average of $10 million to $20 million per year during his or her term to pursue the programs agreed upon with the director; most have not previously had such large discretionary research funds available to achieve their goals. Third, a great deal of discretion is given to the individual program manager as to how he or she will pursue the program. Oversight is minimal, but accountability for achieving results is substantial.

Program Selection Process—Military problems are selected in areas laid out by guidance from the director. Counsel is sought from military combatant commands to determine and study the most pressing military problems that may yield to technical advances.
Concept studies are commissioned, and scientific and engineering advice in each discipline is sought from highly qualified scientists and engineers, both through informal contacts and through workshops and public solicitations for ideas. From these raw materials, research programs and projects are devised by the program manager for execution. Approval is requested from the director with an abbreviated formal program description and an oral report. Proposals are solicited using a variety of formal and informal mechanisms. Except on very large programs, where the director takes a direct role in the procurement, the program manager is most often the selecting authority. Selections may be competitive or, quite often, sole source. Although this process is often much more informal than typical government procurements, it contains the essential ingredient of substantial consultation with the technical community before a program is formulated. As a result, there is a high probability that the best ideas have been exposed to the program manager before he or she structures a program.

Ingredients of Success—ARPA has been widely accepted as one of the nation's most successful research and development organizations. It has a reputation both at home and abroad as an instigator of some of the country's most important achievements in advancing both military and commercial technologies. Two features stand out as most responsible for that measure of success. One is the importance of ARPA's mission within the national security community and the willingness of Congress to fund that mission generously down through the years. The other is the high quality of the technical staff, which has been engendered by ARPA personnel policies and by the opportunity and excitement of the research environment there.