APPENDIX C

SUMMARIES OF AGENCY FOCUS GROUP PRESENTATIONS

The following summaries are based on five focus groups held with the major research-supporting agencies during the fall of 2000 and a workshop hosted by the National Academies on December 18-19, 2000. Each focus group was attended by panel members (three of whom were also members of COSEPUP), by agency representatives who were senior research administrators responsible for GPRA compliance, and by representatives of oversight bodies (Congress, OMB, and GAO) responsible for review of GPRA performance plans and reports from research agencies.

A similar agenda was followed during each focus group. The panel began by explaining its goals, agency representatives described their research programs and their mechanisms for GPRA compliance, panel members and oversight representatives commented on agency methodology, and all participants concluded by offering summary comments. The goal of each discussion was to identify aspects of the methodology that could become “best practices” for use by other agencies and areas where the methodology could be improved.

After each focus group, a summary was produced that used the comments and written materials to answer the following questions:

What methodology is used for evaluating research programs under GPRA?
What level of unit is the focus of the evaluation?
Who does the evaluation of the research program under GPRA?
What criteria are used for the evaluation?
How do the selection and evaluation of projects relate to the evaluation of the research program?
How is the result communicated to different audiences (e.g., the S&T community, advisory committees, agency leadership, the administration, Congress)?
How is the result used in internal and external decision-making?

The workshop was structured differently for a much larger group. The first day's discussion was open to the public and attended by nearly 30 participants. The agenda included a general discussion, an overview, general comments from stakeholders and agencies, breakout sessions, a second general discussion focusing on conclusions and recommendations, panel member comments on the draft report, and a summary session. The second day of the workshop was reserved for panel members, who developed conclusions and recommendations for the report.
APPENDIX C-1

SUMMARY OF THE DEPARTMENT OF DEFENSE FOCUS GROUP

1. What methodology is used for evaluating research programs under GPRA?

1.1 Overview. The Department of Defense (DOD) response to the Government Performance and Results Act (GPRA) is discussed in Appendix I of its Annual Report to the President and the Congress (2000). This appendix summarizes the DOD strategic plan and the ways in which the department links this plan to performance goals and evaluates the performance goals annually. Virtually all DOD's science and technology (S&T) activities fall under “Performance Goal 2.2 – Transform US Military Forces for the Future.” This transforming process is said to be achieved through the development of “new generations of defense technologies.” The strategy for achieving these new technologies involves three elements: the basic research plan (BRP), the Joint Warfighting Science and Technology Plan (JWSTP), and the Defense Technology Area Plan (DTAP).

1.1.1 Basic research. Before World War II, the federal government spent most of its research dollars in federal laboratories. There was considerable opposition to the government's involvement in universities. This was muted by the
arguments of Vannevar Bush, who established the conceptual framework for contractual and “unfettered” basic research. Today, DOD invests 56% of its basic research dollars in universities; 29% goes to government laboratories and 11% to industry. Bush argued that such investments in basic research are acts of both faith and patience, but the investments are justified many times over by returns of great value.

DOD's BRP is described as “the cutting edge of the Defense Science and Technology Program.” This plan is realized primarily by directly funding research in universities, federal laboratories, and industry and by keeping “a watchful eye on research activities all over the world to prevent technological surprise.” The BRP contains an overview of the entire DOD research program, most of which can be described in 12 disciplinary categories.1 Interdisciplinary research is specifically addressed under three programs. In addition, the plan covers education, training, and instrumentation.

DOD supplies only about 6% of the nation's total federal funding for basic research,2 but this effort is focused in a number of fields that are critical to the nation's scientific performance. Universities receive 22% of their basic research funding for mathematics from DOD, 42% for computer science, 71% for electrical engineering, 63% for mechanical engineering, and substantial amounts in optics, materials, and oceanography.

1 Physics, chemistry, mathematics, computer sciences, electronics, materials science, mechanics, terrestrial sciences, ocean sciences, atmospheric and space science, biologic sciences, and cognitive and neural science.

2 FY2000 federal funding of basic research by funding agency was allocated as follows: National Institutes of Health, 50%; National Science Foundation, 13%; Department of Energy, 12%; National Aeronautics and Space Administration, 13%; DOD, 6%; other, 6%.
1.1.2 Applied research and advanced technology development. The BRP is coupled with two complementary plans that focus on applied research and advanced technology development: the Joint Warfighting Science and Technology Plan (JWSTP) and the Defense Technology Area Plan (DTAP). The JWSTP takes a joint perspective horizontally across the applied research (6.2) and advanced technology development (6.3) investments to ensure that needed technology and advanced concepts are supported. The DTAP presents the DOD objectives and the 6.2-6.3 investment strategy for technologies critical to DOD acquisition plans, service warfighter capabilities, and the JWSTP. It also takes a horizontal perspective across the defense agencies to chart the total DOD investment in given technologies.

1.1.3 DTOs. DOD uses defense technology objectives (DTOs) to provide focus for the development of technologies that address identified military needs across the department. Each DTO identifies a specific technology advancement that will be developed or demonstrated, with the expected date of availability, the specific benefits resulting from it, and the amount of funding needed. The DTO process is used to comply with GPRA. The output of this process includes budget and management decisions.

1.1.4 TARA. The methodology used for evaluating S&T programs is known as technology area reviews and assessments (TARA). TARA is the department's official response to GPRA, and it is a mechanism to evaluate science and technology programs through expert peer review. But in this process, basic research is not isolated from applied research and advanced technology development. All three categories—6.1 (basic research), 6.2 (applied research), and 6.3 (advanced development)—are evaluated as overlapping parts of the technology area being reviewed. For example, biomedical research and chemical-biologic warfare
research both have basic-research funding that is particular to them, but they are evaluated in their totality with clear links to what discoveries are expected.

1.1.5 Reliance. The department also uses a process called reliance to guide corporate planning and assessment. Reliance members include the deputy under secretary of defense (science and technology), representatives of all the services, and defense agencies. The objective of reliance is to coordinate the S&T process, stimulate communication among the different services and other groups, and clarify priorities. This is the vehicle for planning and overview that brings the services together. Reliance is designed to encourage collaboration and communication and prevent unnecessary duplication. The group reviews the DTOs themselves, and at the end of the review process all participants sign off on the results of their discussions, so they all have a stake in it.

1.2 What level of unit is the focus of the evaluation? DOD evaluates its S&T activities by reviewing performance at the level of DTOs. There are approximately 400 DTOs, each of which identifies a specific anticipated technology advance, the date of availability, benefits, technologic barriers, and customer. The DTOs are supported by virtually all the S&T defense agencies and services.

1.2.1 The evaluation process. The objectives of the DTAP include creation of technologies that enhance the nation's future warfighting capability. Performance under DTAP can be evaluated by TARA. TARAs are held every two years for a particular technology area. This year, evaluations are being done in biomedical, battlespace environments, ground/sea vehicles, materials and processes, space platforms, chemical and biological defense, and sensors, electronics, and electronic warfare. TARA
reviews all three levels of S&T investment—6.1, 6.2, and 6.3. The TARA reviews are conducted over a period of one week. A review team is asked to evaluate progress toward the individual objectives of DTOs and tries to determine whether that progress should be given a grade of green, yellow, or red.3 The team is also asked whether a certain area—say, “Detection”—is addressing most of the technologic issues that need to be addressed. Is the research portfolio appropriate for the objective? If part of the program took a serious funding reduction, was the reduction justified? The TARA teams evaluate the programs for quality, for advances in state-of-the-art research areas, and for their scientific vision. Last year, 96% of the department's DTOs were given the grade of green.

1.2.2 Examples of evaluation by DTOs. The following examples from the 2000 Chemical and Biological Defense Science and Technology TARA illustrate how evaluation works by using the DTOs as the unit of focus. The TARA process gave the “Modeling and Simulation” DTO a yellow grade because of management problems. Because virtually all other DTOs were awarded greens, this was deemed serious enough to trigger a management reorganization. The DTO on “Force Medical Protection” got a red grade because the TARA panel determined that poor technical assumptions and decisions had been made and that researchers were investigating a technology that was not appropriate for the desired objective. As a result, the defense organization performing the work has altered the technical approach to the objectives.

3 Green means that a program is “progressing satisfactorily toward goals”; yellow means “generally progressing satisfactorily, but some aspects of the program are proceeding more slowly than expected”; red means it is “doubtful that any of the goals will be attained.” These DTO ratings are described as semiquantitative metrics that reflect the opinions of independent experts.
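The green/yellow/red scheme lends itself to a simple tally of the kind behind the "96% green" figure. The sketch below is illustrative only: the first three DTO names appear in the text, the last two are invented to round out the sample, and the tooling itself is an assumption, not DOD's actual review software.

```python
from collections import Counter

# Hypothetical (DTO name, grade) records in the TARA green/yellow/red scheme.
# "Example DTO A" and "Example DTO B" are invented names for illustration.
reviews = [
    ("Modeling and Simulation", "yellow"),
    ("Force Medical Protection", "red"),
    ("Laser Standoff Chemical Detection Technology", "yellow"),
    ("Example DTO A", "green"),
    ("Example DTO B", "green"),
]

def grade_summary(reviews):
    """Count grades and compute the share of DTOs rated green."""
    counts = Counter(grade for _, grade in reviews)
    pct_green = 100.0 * counts["green"] / len(reviews)
    return counts, pct_green

counts, pct_green = grade_summary(reviews)
print(f"{pct_green:.0f}% of DTOs graded green")  # 40% for this toy sample
```

A real TARA cycle would of course aggregate roughly 400 DTOs rather than five, but the rollup logic is the same.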
Sometimes, such questions are referred for more basic research before major changes are made. In a final example of DTAP DTOs, “Laser Standoff Chemical Detection Technology” received a yellow grade because reviewers decided that the project might, given current performance, have problems after 3 or 4 years. The basis for this judgment was that the project's objective was written in a way that didn't match well with what the researchers were actually doing.

1.2.3 A rationale for “holistic” evaluations. This process of evaluating performance by DTOs was established before the passage of GPRA, and the management and reporting chains have remained the same. The 6.1, 6.2, and 6.3 aspects of the DTO are all looked at by the same reviewing panel. Panels do not look at the 6.1 element independently, because it is assumed that basic research has important feedback loops with both the applied research and advanced technology development stages.4 As an example, DOD is seeking a vaccine for the Ebola virus, and until the basis for such a vaccine is discovered, the research will be funded under the 6.1 category. If a potential vaccine construct is discovered, the vaccine will move to application and development stages, where it will be funded under 6.2 and 6.3 categories. As application and development proceed, further work with 6.1 funds might be needed to achieve a more complete basic understanding and more effective application. Under this same holistic approach, the “Laser Standoff” will be funded under 6.1; if the discovery proves out and can be applied and developed, the program will be moved to 6.2-6.3 phases.

4 “Although the DOD model of the transition path from basic research (6.1) to applied research (6.2) to advanced development (6.3) implies a linear model, this is often honored more in the breach than the practice. The ‘push' of the linear process is augmented in DOD by a feedback process, whereby changing operational requirements and new results from multidisciplinary research continually keep the Basic Research Program on target.” DOD Basic Research Plan, 1999, p. I-5.

1.3 Who does the evaluation of the research program under GPRA? The evaluation of basic and applied research is carried out both by internal agency panels of experts and by TARA review panels. Each panel consists of 10-12 technical experts from academe, industry, and nonprofit research organizations. Most TARA team members are recognized experts from the National Academies, the Defense Science Board, the scientific advisory boards of the military departments, industry, and academe. Each is chaired by a senior executive appointed by the deputy under secretary for S&T. These teams are asked to evaluate the programs for quality, for advances in leading the state of the art in research areas, and for their scientific vision. The department requires that two-thirds of each panel be experts from outside DOD. One-third of each panel's members are “refreshed” at the time of each reviewing cycle. Most areas have a 2-year reviewing cycle; chemical-biologic defense is reviewed annually per DOD's implementation of P.L. 103-160.

At a higher level, evaluation is overseen by the Defense Science and Technology Advisory Group (DSTAG), which advises the deputy under secretary for S&T. DSTAG is a key decision-making body consisting of representatives of each service and defense agency. DSTAG provides oversight of an integrated S&T strategic planning process and effectively maintains responsibility for the entire S&T program. It oversees the work of the Basic Research Panel, which consists of eight people and must approve the BRP; the 12 technology panels responsible for preparation of the DTAP; and the 13 panels responsible for preparation of the JWSTP. These plans build on but do not duplicate the service-agency S&T plans.
1.4 What criteria are used for the evaluation? In the broadest sense, all research activities—like any other DOD activities—must be justified under the mission goals of the agency. If a research project cannot demonstrate its mission relevance, it probably will not be funded.5

5 DOD's mission, as defined in its strategic plan, begins as follows: “The mission of the Department of Defense is to support and defend the Constitution of the United States; to provide for the common defense of the nation, its citizens, and its allies, and to protect and advance US interests around the world.” In practice, this mission includes considerable breadth, especially in regard to its Performance Goal 2.2, which is to “transform US military forces for the future.” This goal calls for a continued focus on “new generations of defense technologies,” which provides the foundation for its extensive S&T program.

1.4.1 Evaluating performance. Most specifically, the department evaluates success in achieving the performance goals on two levels. At a lower level of aggregation, individual performance measures and indicators are scored at the end of each fiscal year to determine how performance compared with numeric targets set when the budget was submitted. At a higher level, annual performance goals are evaluated in two ways. First, results for each of the subordinate measures and indicators are evaluated within the context of overall program performance. Second, a determination is made as to whether a shortfall in expected performance for any metric or set of supporting metrics will put achievement of the associated corporate goal at risk. This subjective determination is trend-based and cumulative. A single year of poor performance might not signal that a corporate goal is at risk, although several years of unsatisfactory performance almost certainly will.

1.4.2 Evaluating basic research. At finer levels—for basic research that is expected to lead to new technologies—the department finds that evaluation through the use of metrics is difficult or impossible. There is no reliable way to measure the success of basic research in the near term, because its outcomes are by definition unpredictable. There might be no payoff this year, or next year—until suddenly researchers see a new “data point” that can give rise to a whole new industry. For this reason, the department chooses to demonstrate the value—and outcomes—of basic research through retrospective achievements. The rationale for this is that the most valuable technologies for defense applications have derived from basic research done years or even decades before the first application. Therefore, the causative process can be more clearly illustrated by looking backward than by conjecturing about future results. According to the BRP, “a retrospective approach is a reminder that many of the technologies we now take for granted were brought about by investing much earlier in basic research.” The following examples all resulted largely from timely DOD investments in basic research:

Owning the Night (night vision technology).
Precision Guidance for Air Defense Missiles.
The Airborne Laser.
The Kalman Filter (more accurate data for navigation, guidance, and tracking).
The Global Positioning System.

Retrospective studies are intended to build support for the process, not for individual projects. It is not possible to point to the outcome of an ongoing individual project.

1.4.3 Education and training. Other criteria used to evaluate programs include education and training. Clearly, human resources are essential to the future strength of DOD. The department funds more than 9,000 graduate fellowships per year, two-thirds as many as the National Science Foundation (NSF).
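The trend-based, cumulative determination described under 1.4.1 can be sketched in a few lines. The window length and the boolean encoding of a shortfall are assumptions made for illustration; the report does not specify DOD's actual decision rule.

```python
def goal_at_risk(shortfall_by_year, window=3):
    """Flag a corporate goal as at risk only when the most recent
    `window` fiscal years all show a performance shortfall; a single
    bad year is not enough.  Input is a list of booleans, oldest
    first, where True means the supporting metrics fell short that
    year.  The 3-year window is an illustrative assumption."""
    if len(shortfall_by_year) < window:
        return False
    # Cumulative, trend-based reading: every year in the window fell short.
    return all(shortfall_by_year[-window:])

print(goal_at_risk([False, True, False, False]))  # False: one isolated bad year
print(goal_at_risk([False, True, True, True]))    # True: sustained shortfall
```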
program direction. The laboratories are shared by many users from academe, government, and industry. Most laboratories, such as Brookhaven National Laboratory on Long Island, are government-owned and contractor-operated (GOCO). The contractors may be universities, not-for-profit organizations, industries, or consortia.

1.1.2 A shortage of needed information. The scientific offices within DOE have found it difficult to comply with GPRA. The agency as a whole lacks a good system for tracking data that it needs to report on all its activities. The agency attempted to rectify this situation through a substantial investment in a federal government R&D database, but the lesson of that experience was that the agency needed its own system. DOE tried at first to use a systemwide framework that emphasized the agency's financial structure in the hope that it would be easy to reconcile with the budget. This financial overlay, however, did not accurately represent what the agency does, and it was divorced from actual planning. The linkages between this plan and what occurs at the program-manager level were weak, and the plan did not match well with GPRA. The General Accounting Office (GAO) was critical of the process, and administrators felt both external and internal pressure to change it.

1.1.3 A new planning model. Planners knew that a new planning model would have to be flexible because each new administration formulates different policies. But GPRA requires uniformity and a clear linkage between the performance plan and the performance report. The model would have to be able to show how the actions of DOE administrators result in excellent science at universities. As a result, SC (DOE's Office of Science) is currently attempting to design a new strategic planning process to characterize the research it is doing and link its GPRA reports more logically to science.
1.1.4 “Science is different.” The reason for this attempt is that scientific organizations are different from other agencies because scientific research is different from other activities. Therefore, strategic planning for science should also be different. Through a literature survey and focus groups, the agency is trying to develop a more “holistic view of the pathways that lead to excellent science.” The goal is to describe the pathways that an organization should take to achieve excellent science. The agency has been studying this subject for one year and has now described an environment that “fosters excellent research at the performing institution.” A suggested framework includes two dimensions (internal focus and integration, and external focus and differentiation) and four perspectives of effective organizations: human-resource development, internal support structures, innovation and cross-fertilization, and setting and achievement of relevant goals.

1.2 What level of unit is the focus of the evaluation? The agency has had difficulties in setting an appropriate level of unit for evaluation and in finding relevant performance measures. The individual programs had no early basis for deciding what level of aggregation to use or how many measures to apply. Therefore, some program-level reports have been very detailed and others more general, depending on the approach of individual program directors.

1.2.1 Reviewing methods. Below the level of programs (and of the GPRA performance reports), much of DOE's research budget is allocated to support individual, investigator-driven research projects in universities. These projects are evaluated individually by traditional peer review—that is, the same
external, independent review system used by the National Science Foundation, the National Institutes of Health, and other agencies that support external research. For research supported and overseen directly by the agency, the unit of evaluation is usually the laboratory or the program within the laboratory. These units have long-established means of evaluation through external and program reviews that have been maintained for GPRA.

Some subtlety is involved in evaluating large facilities during construction or operation. Most of them, such as the Spallation Neutron Source, are “one-of-a-kind” projects whose very construction may involve cutting-edge science. Once they are operational, the “maintenance” expenses for such facilities may become difficult to distinguish from the “research” expenses for the purpose of GPRA.

The agency also measures its contribution to S&E human resources. The agency maintains a commitment to supporting graduate and postdoctoral education; despite budget losses in the laboratories, it has roughly maintained existing levels of grants to universities.

1.3 Who does the evaluation of the research program under GPRA?

1.3.1 Peer reviewers. For the university grant program, virtually all individual projects are evaluated by regular peer review under the Office of Science's Merit Review System guidelines. This external process conforms to standard independent peer-review procedures. For laboratory research programs and facilities (e.g., Argonne National Laboratory and the Princeton Plasma Physics Laboratory), a formal documentation system similar to peer review is the norm. For example, BES (Basic Energy Sciences) evaluates the research projects it funds according to procedures described in Merit Review Procedures
for Basic Energy Sciences Projects at the Department of Energy Laboratories. These procedures are patterned after those given for the university grant program. Peer review at the laboratories is intended to provide an independent assessment of the scientific or technical merit of the research by peers who have “knowledge and expertise equal to that of the researchers whose work they review.”

1.3.2 Technical experts. Major construction projects are evaluated by technical experts who look at relatively straightforward criteria, including cost, schedule, technical scope, and management (“Lehman reviews”). Reviews of major projects are typically held twice per year and may include 30-40 independent technical experts divided into six to eight subpanels.

1.3.3 Advisory committees. For each of the five SC programs, the evaluation procedure also includes advisory committees. For example, the 26-member Basic Energy Sciences Advisory Committee (BESAC) meets two to four times per year to review the BES program, advise on long-range planning and priorities, and advise on appropriate levels of funding and other issues of concern to the agency. BESAC subcommittees focus on more specific topics, such as neutron-source upgrades and DOE synchrotron radiation sources. Users of BES facilities are surveyed annually and asked for quantitative information about publications, patents, Cooperative Research and Development Agreements, prizes and awards, and other achievements.

BESAC reviews do not feed directly into the GPRA process. The committee looks at peer reviews, contractor reviews, citation indexes, major awards, and any other relevant information; distills the information; and reports directly to the director of the Office of Science. The committee attempts to clarify why DOE is supporting particular programs and to gauge the contribution of individual facilities to the agency's research effort.
1.3.4 Dual reviews for GOCOs. GOCOs are assessed by both DOE and the contractors. The agency does not rely on a contractor's review alone, because the contractor has an incentive to provide a favorable review to justify its compensation. Instead, the agency does annual “contractor appraisals” by using independent peer review. Ratings are given for

Research quality.
Relevance to mission.
Research facilities.
Research management.

Overall appraisals are “rolled up” from individual laboratory reviews for all programs. These contractor appraisals affect performance fees and contract renewals.

1.4 What criteria are used for the evaluation? DOE's budget narrative system lists a summary of “budget guidance” items, beginning with program mission, program goal, and program objectives. DOE is attempting to reconcile GPRA's requirements with the budgetary requirements.

1.4.1 Separating excellence from relevance. The new system departs from the intent of the three COSEPUP criteria, however, by yoking the first two, excellence and relevance. These measures should be separated. Some excellent research may not be relevant to the agency's mission, and some relevant research may not be of excellent quality.

1.4.2 150 measures. SC has been using more than 150 performance measures, which DOE representatives (and GAO) acknowledge is an unwieldy number. This system has not been helpful in assessments to date, partly because the measures are not specific enough, do not clarify the DOE role, do not include means
of validation and verification, and do not have clear links to the DOE strategic plan and budget. The agency's “emerging measures” are patterned more closely on the COSEPUP recommendations by including leadership. To measure the level of leadership, the agency is contemplating the use of the “virtual congress,” as suggested in the COSEPUP report.

1.4.3 Studying new criteria. The new criteria for performance metrics—now being studied by a group led by Irwin Feller, of Pennsylvania State University—are being examined in the hope of allowing a response to GPRA that is “grounded in research.” The criteria will attempt to include the following elements:

Reasonable metrics (that is, reasonable for assessing a science agency).
Excellence in science management (a 3-year study that benchmarks best management practices was launched in January 2000).
Science “foresighting” (another 3-year study is examining science trends “out to 25 years”).
Portfolio analysis (using information-technology tools, including deep data-mining, to characterize the research portfolios of the Office of Science, the federal government, and the “international S&T research portfolio”).
Miscellaneous efforts (to apply organizational and management theory).

1.4.4 The need to take risks. The Office of Science also uses the criterion of risk in evaluating its programs. Without taking risks in research, programs and projects are unlikely to achieve the high-reward payoffs of the best investigations. Missions need flexibility in choosing research directions because peer review by itself is inherently conservative.
1.5 How do the selection and evaluation of projects relate to the evaluation of the research program? Participants discussed at some length the “charter” of DOE and how DOE managers decide to include or exclude various programs or research topics from this charter. This issue is important in assessing the relevance of research for GPRA.

1.5.1 Complexities of project selection. The process of selecting projects is complex and combines information from the Office of Strategic Planning, input from advisory committees, and program decisions made internally. The users of DOE facilities come from many institutions, with many agendas, and DOE does not want to restrict the scope of research for those who are using the facilities in productive ways.

2. How is the result communicated to different audiences (e.g., S&T community, advisory committees, agency leadership, Administration, Congress)?

In its report to Congress on the usefulness of agency performance plans, GAO noted that SC's FY2000 plan was “moderately improved” over the FY1999 plan but still bore little relationship to budgeting. The agency felt that more improvement was needed and for the succeeding year attempted to follow the structure of the budget more closely. Therefore, it organized the performance goals by budget accounts and annotated the performance goals with linkages to the strategic plan by identifying the strategic objectives they support.

2.1 Meeting with oversight staff. The agency also met with congressional staff and agreed to characterize its results by four categories: exceeded goal, met goal, nearly met goal, and below expectation. Each rank was based on deviation from the expectation established in the performance goal.
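The four-category characterization agreed on with congressional staff amounts to a mapping from deviation to rank. The numeric cutoffs in the sketch below are assumptions made for illustration; the text says only that each rank is based on deviation from the expectation in the performance goal.

```python
def characterize(actual, target):
    """Map a result against its performance goal to one of the four
    agreed categories.  The 10% "nearly met" band and the exact-match
    test for "met goal" are illustrative assumptions."""
    deviation = (actual - target) / target  # fractional deviation from goal
    if deviation > 0:
        return "exceeded goal"
    if deviation == 0:
        return "met goal"
    if deviation >= -0.10:  # within 10% of the goal: assumed cutoff
        return "nearly met goal"
    return "below expectation"

print(characterize(110, 100))  # exceeded goal
print(characterize(95, 100))   # nearly met goal
```

Real performance goals would also need a convention for measures where lower is better (costs, defect rates), which this sketch omits.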
This was done in response to GAO's concern that baselines and context had not been provided to compare with performance. The agency has also added a section on verification and validation under each decision unit, including periodic guidance, reviews, certifications, and audits. Because of the size and diversity of the department's portfolio, verification is supported by extensive automated systems, external expert analysis, and management reviews.

2.2 Communicating about the new model. There is considerable communication between DOE and GAO. After receiving a GAO report indicating that procedures for peer review vary among federal agencies, the House Science Committee asked GAO to investigate. GAO randomly sampled 100 BES research projects and concluded that the agency was performing merit review properly and following the established procedures.

3. How is the result used in internal and external decision-making?

3.1 GPRA results do not yet influence funding. A common assumption about GPRA is that its results will be used to make funding decisions. However, many congressional staffs have not yet found ways to match performance results with funding decisions, because the process is still new and results are not often easily aligned with budgetary structure.

3.2 A critique of GPRA reports. Performance metrics do little good unless they embrace the scientific effort as a whole. For example, metrics of construction projects say little about the value of the science that they are intended to support. It is important to use quality, relevance, and leadership as evaluation criteria; the agency should not try to review the whole portfolio every year.
Office of Science officials stated that they are suggesting a process very similar to this.

3.3 One result is DOE's new model. Indeed, one result of DOE officials' attempts to evaluate their scientific research for GPRA has been to convince the agency of the desirability of the new assessment model that they are studying. The goals of the study are to

Investigate how funding agencies can foster excellent science.

Focus on the impacts of interactions among the Office of Science and science-performing organizations.

Identify relevant research in organizational effectiveness and science management.

Fill gaps in knowledge of public-sector issues in management of scientific research.

Formulate strategies for dealing with large changes in research and funding environments.

Preliminary results have been mentioned above, but much of the study remains to be accomplished. The agency noted that its reviews do have results: a poor review of the construction of the Spallation Neutron Source had resulted in substantial changes in senior management.
DOE Focus Group Participant List
November 29, 2000

Panel Members:

Alan Schriesheim (Cochair)
Director Emeritus
Argonne National Laboratory
Argonne, Illinois

Morris Tanenbaum
Retired Vice Chairman and Chief Financial Officer, AT&T
Short Hills, New Jersey

Participants:

Eugene W. Bierly
Senior Scientist
American Geophysical Union
Washington, D.C.

Jack E. Crow
Director, National High Magnetic Field Laboratory
Florida State University
Tallahassee, Florida

Eric A. Fischer
Senior Specialist in Science and Technology
Congressional Research Service
Library of Congress
Washington, D.C.

Richard D. Hazeltine
Professor of Physics
Director, Institute for Fusion Studies
University of Texas at Austin
Austin, Texas

Michael Holland
Program Examiner
Office of Management and Budget
Washington, D.C.

Genevieve Knezo
Specialist, Science and Technology Policy
Congressional Research Service
Library of Congress
Washington, D.C.

Robin Nazzaro
Assistant Director
US General Accounting Office
Washington, D.C.

Fred Sissine
Specialist in Energy Policy
Congressional Research Service
Library of Congress
Washington, D.C.

David Trinkle
Program Examiner, Science and Space Programs Branch
Office of Management and Budget
Washington, D.C.
Agency Representatives:

Patricia Dehmer
Associate Director, Office of Basic Energy Sciences
US Department of Energy
Germantown, Maryland

William J. Valdez
Director of the Office of Planning and Analysis
US Department of Energy
Washington, D.C.