5.5 Decadal Science Strategy Surveys: Report of a Workshop

Jack D. Fellows, Rapporteur, Joseph K. Alexander, Editor

Summary

The Workshop on Decadal Science Strategy Surveys was held on November 14-16, 2006, to promote discussions of the use of National Research Council (NRC) decadal surveys for developing and implementing scientific priorities, to review lessons learned from the most recent surveys, and to identify potential approaches for future surveys that can enhance their realism, utility, and endurance. The workshop involved approximately 60 participants from academia, industry, government, and the NRC. This report summarizes the workshop presentations, panel discussions, and general discussions on the use of decadal surveys for developing and implementing scientific priorities in astronomy and astrophysics, planetary science, solar and space physics, and Earth science.

WORKSHOP BACKGROUND

NRC decadal surveys provide broad assessments of the status of research fields, and they develop recommendations for scientific and programmatic priorities for future investments in the fields. Workshop participants from both inside and outside the government shared the view that the decadal surveys are important, especially because they have positive impacts on federal agency planning and decision making and on science community unity. Efforts by survey committees to draw wide community participation, engage in consensus building, and set explicit science-based priorities were repeatedly cited as features of the surveys that make them the gold standard for advice on research program planning.

While the surveys have been widely successful, government agencies and the scientific communities that have tried to follow survey advice have had to deal with several notable problems, including the following:

  • Cost and technical risk. Survey estimates of program cost and technology readiness have sometimes proved to be overly optimistic or have not included the full life-cycle costs of initiatives.

  • Resiliency and execution. Surveys have not always provided guidance on how to respond to budgetary, programmatic, or policy changes that significantly impact survey recommendations. Nor have they addressed the impacts on a balanced portfolio of large, medium, and small projects when a large development project encounters cost and/or technical trouble.

  • Planning, management, and collaboration. The charges to survey committees have often been very broad and open ended, and the surveys themselves have not always been timed to match agency or political planning cycles. Survey committees have also encountered problems when research programs are not well coordinated between agencies or nations. Some ask whether survey users should have recommendations on broad science objectives or on specific missions or facilities. Recent surveys have not always explicitly reconsidered the recommendations of previous surveys.

LEVERAGING PAST SUCCESSES AND IMPROVING FUTURE SURVEYS

The workshop brought together subject experts, previous survey committee members, and a broad range of survey users over a 3-day period to discuss the pros and cons of various approaches of recent surveys to issues such as those noted above so that future surveys can handle these issues as effectively as possible. This rich discussion touched on each of the three problem areas noted above, with the views expressed by many participants highlighted below.

NOTE: “Summary” reprinted from Decadal Science Strategy Surveys: Report of a Workshop, The National Academies Press, Washington, D.C., 2007, pp. 1-5.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Cost and Technical Risk

Many participants noted that program cost estimates, which were based largely on information from the National Aeronautics and Space Administration, have become problematic and that too often costs have turned out to be as much as four times the original cost estimates. Participants urged that agencies continue to develop and improve cost and schedule parametric models for space missions. These models should reflect that software has become a dominant factor that impacts cost and schedule for many missions in ways just as important as traditional factors such as spacecraft mass or power. Some experts also noted that instrument development is a leading contributor to cost risk, so that programs for reducing risk related to instruments are also especially important.

In looking to future decadal surveys, participants appeared to agree that survey committees need to do four things: (1) include cost assessment and technology experts on survey committees, (2) obtain independent cost estimates and include cradle-to-grave life-cycle costs, (3) include cost uncertainty indexes to help define the risk of cost growth, and (4) use common costing approaches so that costs for different missions or facilities can be compared.

The discussions repeatedly made the point that early cost estimates are better for comparing mission costs than they are for providing absolute cost projections, because most mission candidates are far short of being ready for a preliminary design review. Consequently, participants argued for the use of cost bins, or ranges, rather than specific costs. They also suggested that the technology being considered for a mission or facility needs to be well understood or characterized before a cost for it can be estimated.
Resiliency and Execution

Workshop participants acknowledged that unanticipated, but seemingly inevitable, changes in the budgetary, programmatic, and/or political environment present a challenge to the ability of government agencies and the research community to implement the priorities set in a survey report. They identified a number of steps that could enhance the resiliency of a survey’s recommendations and increase the likelihood that a recommended program could be executed as proposed. For example, participants argued that survey committees should establish metrics for creating and maintaining a balanced set of projects. Many speakers felt that survey committees need to recognize that proposing a range of missions or facilities (small, medium, and large, plus core research and technology activities) will be intrinsically more resilient than proposing mostly large, complex initiatives. They supported the idea that maximizing the number of projects to be selected competitively will enhance program resiliency.

In discussions about how to cope with a large increase in the cost of projects and the attendant impacts of such growth, some participants felt that survey committees need to be very conservative about recommending large missions that are not yet well defined or understood. As one speaker put it, “If a mission isn’t understood well enough to derive a good cost estimate, then it doesn’t deserve a priority.” Several experts argued that even after a presumably well-founded cost estimate is in hand, a reserve (~20 percent of the expected mission cost) must be held separately from the project manager’s mission or facility development budget contingency funds. Finally, several participants noted that survey committees would be wise to start with a more realistic sense of agency budgetary and policy environments and to build stronger partnerships with agencies so that surveys can be more resilient.
Planning, Management, and Collaboration

Discussions about the timing of decadal surveys touched on both the time required to complete a survey and the time span over which a survey should look. There appeared to be broad agreement that 2 years is roughly appropriate for completing a comprehensive survey and that 10 years is about the right planning horizon. Agency representatives mentioned that they get plenty of conflicting advice, so that having a stable, long-term survey is very important. While there were also arguments that surveys should not be arbitrarily revised, it might make sense to build triggers into surveys, where cost growth or policy changes would require the survey to be revisited by a qualified group. A number of speakers suggested that scenario analysis be a part of any survey whose users want it to remain robust over a decade or, even better, that decision rules be included in the survey for dealing with unforeseen changes. Discussions of timing also drew suggestions that surveys need to be synchronized with other key planning processes (e.g., agency planning milestones and political cycles).

Several important factors for planning and organizing decadal surveys were mentioned repeatedly. First, former survey committee chairs noted that the survey charge must be clear and focused to avoid open-ended tasks and should be vetted fully with the research community and relevant government agencies. There was widespread agreement that surveys should have substantial community ownership and input. Some participants argued that all of the stakeholders (including the science community, federal agencies, and Congress) need to be part of the survey process (including definition, information gathering, and dissemination of results).

A point that became clear in discussions of a survey’s assessment of cost, technology risk, and program execution was that survey committees need to include not only scientific disciplinary expertise, but also expertise in other areas such as hardware development, program management, systems engineering, cost estimating, and policy. There was also general agreement that survey planning should include how to disseminate the survey report to users and how to make it comprehensible and appealing to the public.

Workshop participants expressed support for the idea that surveys should remain focused on science first so that there would be a clear and compelling presentation of the important science to be done and that the subsequent presentation of programmatic priorities and recommendations should always be traceable back to the science. Speakers also agreed that it is important to highlight applications that can be drawn from basic science missions and that can benefit society in an immediate and tangible way. Finally, many participants noted that priorities recommended in previous surveys should be readdressed in the context of the new survey and its priorities, and recommendations in the previous surveys should not be assumed to be guaranteed or irreversible.
The workshop also stimulated discussion about several aspects of internal and external coordination. First, some participants acknowledged that while interagency and international cooperative programs have a chance of promoting cooperation across organizational boundaries, they tend to be a great challenge and rarely result in substantial cost savings. Therefore, agencies need to give extra attention to integrating such cooperative efforts as effectively as possible. Second, participants noted that as long as human exploration of space is a major national space goal, future surveys should not altogether ignore such exploration. However, science surveys should stick with the principle of “science first” while integrating research that can be enabled by human spaceflight into overall science priorities to be recommended in a survey report.