5
Decadal Survey Program Formulation and Opportunities for Improvement

At the heart of each decadal survey is a recommended program strategy for one of the four Earth and space sciences, including a set of scientific priorities, recommended missions, and other programmatic elements that will help propel a discipline forward. The daunting task of formulating such a program and selecting its constituent missions is perhaps the greatest challenge the survey committees and panels face. If the program is not feasible or executable, then the discipline is at risk of losing research opportunities that lay a foundation for further advancement. The session on decadal survey program formulation and opportunities for improvement focused on how the decadal surveys formulated their respective programs and how missions were selected. The panelists did not go into great detail about the cost and technical evaluation (CATE) process, which is the focus of Chapter 6.

Moderator: Alan Dressler, Observational Astronomer, Observatories of the Carnegie Institution for Science; Member, Space Studies Board; Co-Chair, Workshop Planning Committee

Panelists:
Richard A. Anthes, President Emeritus, University Corporation for Atmospheric Research; 2007 Earth Science and Applications from Space Decadal Survey
Colleen N. Hartman, Deputy Center Director for Science, Operations, and Program Performance, NASA Goddard Space Flight Center
J. Todd Hoeksema, Senior Research Scientist, W.W. Hansen Experimental Physics Laboratory, Stanford University; 2010 Astronomy and Astrophysics Decadal Survey; 2013 Solar and Space Physics Decadal Survey
Stephen Mackwell, Director, Lunar and Planetary Institute; Member, Committee on Astrobiology and Planetary Science; 2011 Planetary Science Decadal Survey
Marcia J. Rieke, Regents’ Professor of Astronomy, University of Arizona; Member, Space Studies Board; Member, Committee on Astronomy and Astrophysics; 2010 Astronomy and Astrophysics Decadal Survey

INTRODUCTORY REMARKS

Panel moderator Alan Dressler began his introductory remarks by noting that program prioritization is one of the key activities conducted by a decadal survey. In this process, scientific aspirations—what we would like to do—come face-to-face with what we can do. The four divisions of NASA’s Science Mission Directorate—astrophysics, planetary science, heliophysics, and Earth science—have each commissioned their own decadal survey. The diversity of the resulting surveys reflects the distinct differences among these four divisions: differences in the nature of their science programs, their responsibilities to the nation, and the unique cultures of their respective disciplines. This session focused on how each of the four science disciplines uses the decadal survey process to reach consensus about how best to accomplish each discipline’s science and service goals and how to move the field forward.
As Space Studies Board Director Michael Moloney explained in his overview of the decadal survey process (see Chapter 2), all decadal studies were undertaken by a survey committee supported by topical panels that prioritized science, evaluated individual initiatives, and constructed programs. The panels reach more deeply into their respective scientific communities and are therefore more representative of those communities than the parent survey committee.

Dressler stated that program prioritization is the heart of the decadal process. Contrary to the views of presenters in the previous session, Dressler does not think it is possible to strictly prioritize diverse scientific activities without reference to the means by which those priorities might be implemented. He illustrated his point by referencing his experience as chair of one of the 2010 astronomy and astrophysics decadal survey’s program prioritization panels (PPPs). The five science frontiers panels (SFPs) reviewed some 100 white papers solicited from the astronomy and astrophysics community. After many months of effort, each SFP prioritized the science in its respective subdiscipline and identified four key questions—for example, What determines star formation rates and stellar masses? How do rotation and magnetic fields affect stars? What are the flows of matter and energy in the circumgalactic region? How do cosmic structures form and evolve? How did the universe begin? There is no way, Dressler contended, that any group could rank such questions from diverse subdisciplines in priority order. Even within a single subdiscipline it is difficult, if not impossible, to rank science priorities without reference to the means by which each priority might be addressed.

Dressler’s final remarks concerned the questions that should be addressed in ranking scientific and mission-related issues. Of myriad possibilities, science-related questions consider whether the science is

• Transformational science or incremental science?
• A fundamental physics measurement?
• Of interest to the general public?
• Broad science with a focus?
• Along a path of scientific investigation that has great future promise?

Mission-related questions consider whether the mission is

• Technically feasible?
• Based on mature technology, or requiring further development?
• Built on technology useful outside of astronomy?
• Straightforward or complex, with irreducible risk?
• Supported by a strong educational and public outreach component?
• Ready to go sooner rather than later?
• A bargain, of moderate cost, or killer expensive?
• An international collaboration, or a go-it-alone effort?
• Along a path of technological development that is valuable for future missions?

PANEL DISCUSSION

During their discussion, the moderator and panelists talked about their experience with and opinions on the following topics:

• Science goals,
• Source of program and prioritization criteria, and
• NASA’s perspective.
Science Goals

Moderator Alan Dressler began the panel discussion by asking the panelists their perspectives on the question, How do the science and service goals of your discipline, community activities, and relationships with government agencies define your decadal process?

Richard Anthes commented that the 2007 Earth science and applications from space decadal survey decided early in its process that applications and societal benefits would play a role equal to that of basic science in the committee’s deliberations. Subsequently, it was realized that advancing the basic science of the Earth system was in itself a societal benefit. Such considerations led to the interdisciplinary and thematic orientation of the survey’s panels, where basic science, applications, and societal benefits were considered on an equal footing. When asked, Did this organizational approach work? Anthes replied, Yes, to a certain extent. When it came to the prioritization of the specific missions and other initiatives proposed by the panels, the survey committee awarded a higher ranking to those activities that addressed both scientific and societal questions. However, it was clear to Anthes that the panels more closely oriented toward societal issues had more difficulty in defining, for example, specific measurements relevant to their concerns.

Todd Hoeksema noted that four issues make solar and space physics somewhat different from the other decadal disciplines. First, it is not possible to consider the individual components of the Sun-Earth-heliosphere system in isolation. As a result, the 2013 solar and space physics survey committee adopted a panel structure reflecting the systems approach necessary to address the particle and field phenomena occurring in the Sun-Earth system. Second, like Earth science, solar and space physics has both a scientific and an operational component. Changes in the near-Earth space environment driven by solar activity can have significant impacts on technological systems such as power grids and navigational systems. Third, scientific progress in understanding the diverse particle and field environments that exist between the surface of the Sun and the boundaries of interstellar space requires a widely dispersed system of in situ and remote-sensing platforms, the so-called heliophysics systems observatory. The maintenance of the constellation of in situ spacecraft and remote-sensing systems comprising this observatory is essential to the discipline. Finally, solar and space physics does not really require large missions; small- and mid-size, principal-investigator-led missions best address the scientific aspirations of the solar and space physics community. The survey’s structure of panels and cross-panel national capabilities working groups reflected the interplay of these four factors.

Stephen Mackwell began his answer by noting that planetary science is destination-oriented, so a panel structure organized around destinations was most appropriate. The 2011 planetary science decadal survey’s five panels collectively encompassed all of the planetary bodies in the solar system. The panel structure also reflected some of the technical challenges associated with exploring particular parts of the solar system. The terrestrial planets (i.e., the area of responsibility of the Inner Planets and Mars panels) can be readily accessed by small- and medium-class (e.g., Discovery and New Frontiers) spacecraft using solar-power systems, whereas the planets and satellites of the outer solar system (i.e., the area of responsibility of the Giant Planets and Satellites panels) are more readily addressed by medium- and large-size (e.g., New Frontiers and Flagship) spacecraft using nuclear-power systems. Similarly, observations with ground- and space-based telescopes play a greater role in studies of the innumerable comets, asteroids, and other primitive solar system bodies than they do, for example, in the study of the terrestrial planets.

In the planetary science survey, the primary responsibility of the panels was to identify the key science questions within their respective areas of responsibility. Each panel developed three or four themes encompassing the science goals that could be achieved for its specific planetary bodies in the coming decade. These themes and associated science goals were forwarded to the survey committee for discussion and synthesis into overarching themes for solar system science. These discussions within the panels and within the survey committee led, quite naturally, to a prioritization of the most important science goals for the coming decade. Each panel then looked at how these science goals might be addressed, and out of that process mission concepts were formulated.
Marcia Rieke began her response by reminding the audience that the 2010 astronomy and astrophysics decadal survey was organized around three components: science prioritization panels (i.e., the SFPs), implementation prioritization panels (i.e., the PPPs), and panels dealing with the status of the community (i.e., the infrastructure study groups). In addition, astronomy has both ground- and space-based components. Separating the science and implementation prioritization aspects of the survey worked well. Developing science goals first and then having separate groups determine how to implement them was a good idea; it avoided an issue Rieke had seen while working on two prior decadal surveys: picking specific science goals because they were what a preselected mission was good at doing. Rieke noted that each of the PPPs had to wrestle to ensure that the SFPs’ scientific desires matched up with what the missions could do and to make certain that the mission priorities folded in and reflected the science priorities. The survey committee then had to pull together all of these threads and turn them into a self-consistent final report. A comparison of the published survey report1 with the published panel reports2 reveals that the survey committee did not always agree with the implementation rankings coming out of the PPPs. This arose because the survey committee had to construct an integrated program. Thus, for example, the survey committee gave a higher priority to one ground-based project—the Large Synoptic Survey Telescope (LSST)—than was given by the relevant PPP, because the LSST could address key science goals relating to dark energy in a manner complementary to a priority space-based project.
Source of Program and Prioritization Criteria

Alan Dressler then posed a series of related questions to his panelists concerning the source of the programs that were evaluated and prioritized in each of the four recent decadal surveys: Were the programs a legacy from previous surveys? Did they come from outside of the survey, or were they generated within the survey process? If the latter, what were the relative roles of the survey committees and the supporting panels? Was the initial list of possible programs much too large, therefore requiring an early winnowing-down phase? If so, how was this winnowing done? What role did NASA play in generating mission concepts? And what basic criteria were used to prioritize initiatives and assemble a program? Scientific merit and technical feasibility are obvious criteria, but other potential considerations include the following:

• Broadness of impact in the field versus, for example, a potentially transformational and fundamental measurement;
• Potential for international collaboration;
• Value to the nation;
• Public interest and education and outreach;
• Balance across different subdisciplines and across mission size; and
• Cost, readiness, complexity, and risk.

In addition, who applied the criteria, when were they applied in the decadal process, and how well did they work?

Richard Anthes responded that the 2007 Earth science decadal survey committee issued a community-wide request for white papers. Some 135 responses were received, and they were forwarded to the panels, together with other relevant documents issued by the National Research Council (NRC), the National Oceanic and Atmospheric Administration, NASA, and the World Meteorological Organization.

1 National Research Council (NRC), New Worlds, New Horizons in Astronomy and Astrophysics, The National Academies Press, Washington, D.C., 2010.
2 NRC, Panel Reports—New Worlds, New Horizons in Astronomy and Astrophysics, The National Academies Press, Washington, D.C., 2011.
Then, via a somewhat messy and unstructured process, the panels generated a list of approximately 35 recommended measurements. The survey committee took these recommended measurements and, through a process of down-selecting and packaging, devised the survey’s 17 recommended missions. The prioritization criteria included scientific merit and societal benefit, but they also included the ability to address multiple science or applications goals (i.e., “bang for the buck”), technical readiness, and affordability. Affordability was an important criterion, but since this survey was undertaken prior to the requirement to obtain independent cost estimates, there was no CATE process to identify likely costs. However, the survey committee did estimate costs via a variety of mechanisms. Moreover, decision rules were crafted as to how the priority of a particular mission would change if its cost increased. In hindsight, these rules were naïve: a 10 percent increase in cost was sufficient to decrease a mission’s priority, and a “substantial” increase would trigger its termination. The survey estimated that the cost of its highest-priority mission should be about $300 million. This mission is now budgeted at about $900 million, and it is still at the top of the queue. Anthes commented that what is really needed is cost estimates accurate to one significant digit.

Todd Hoeksema responded that the 2013 solar and space physics decadal survey committee used a process similar to that of the Earth scientists. The first goal was to identify key science issues. White papers were solicited on science questions and science topics, not mission concepts. Nearly 300 white papers were received, and these, together with the relevant NASA roadmaps, were sent to the panels. The panels discussed and digested all of this input and came up with lists of the most important science questions and initial ideas concerning missions that might address these questions.
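The cost-trigger decision rules Anthes described for the 2007 Earth science survey can be put into a short, purely illustrative sketch. The 10 percent demotion trigger comes from the discussion above; the survey did not quantify the “substantial” increase that would terminate a mission, so the 30 percent threshold below, like the function itself, is an assumption made only for illustration.

```python
# Hypothetical sketch of cost-based decision rules like those Anthes
# described: ~10 percent cost growth demotes a mission's priority, and a
# "substantial" increase (assumed here to be 30 percent) triggers
# termination. Thresholds and structure are illustrative, not the
# survey's actual rules.

def apply_decision_rule(baseline_cost, current_estimate,
                        demote_threshold=0.10, terminate_threshold=0.30):
    """Return the recommended action for a mission given its cost growth."""
    growth = (current_estimate - baseline_cost) / baseline_cost
    if growth >= terminate_threshold:
        return "terminate"
    if growth >= demote_threshold:
        return "demote"
    return "retain priority"

# The chapter's example: a mission estimated at $300 million that is now
# budgeted at about $900 million (200 percent growth).
print(apply_decision_rule(300e6, 900e6))  # prints "terminate"
```

The example also shows why Anthes called the rules naïve: under any such fixed thresholds, the survey’s top-priority mission would have been terminated long ago, yet it remains at the top of the queue.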
The survey committee considered each of these mission ideas and asked each of the three panels to identify its top four mission concepts for additional study by a design team (kept separate from the CATE team) at the Aerospace Corporation. Several survey committee members worked closely with the mission-design team, and the interactions between the survey committee and Aerospace were quite dynamic. Based on the Aerospace design studies, 6 of the 12 concepts were chosen for scrutiny by Aerospace’s CATE team.

The prioritization criteria the 2013 solar and space physics survey committee used to select mission concepts were scientific merit, relevance to societal issues, technical readiness, and timing relative to the solar cycle or other missions. Scientific merit had several components, including relevance to advancing a particular discipline and contribution to advances in areas relevant to several disciplines. Societal issues focused on benefits to the nation accruing from a better understanding of space weather events. Programmatic issues were also considered. For example, mission concepts were not formulated to address science goals that could reasonably be addressed in the context of the Explorer program. Another programmatic issue of concern was the appropriateness of the cost of a particular mission concept: Was it worth its cost? Finally, programmatic balance was an important consideration: Did the recommended program have an appropriate mix of small, medium, and large missions? The survey committee’s view was that NASA’s existing mission mix was skewed in favor of large missions, and the committee wanted more small missions.

Stephen Mackwell stressed that a decadal survey is intended to embody a community-wide consensus, and this requires community input. The 2011 planetary science decadal study used many, if not all, of the mechanisms already described to generate input to both the science prioritization and mission formulation phases of the survey.
In particular, 199 white papers were received from planetary scientists around the world. The survey also reached out to NASA’s community-based assessment and analysis groups and organized town halls and other outreach activities at major planetary science conferences.3

In planetary science, the flow from science to missions was clear. (1) Science questions were developed by the panels based on the input received from all sources, plus internal deliberations. The panels’ science questions were then integrated across the panels by the survey committee. (2) Mission concepts were developed by the panels to address these science questions. (3) Selected mission concepts were forwarded from the panels, via the survey committee, to leading mission design centers for studies to determine their technical feasibility. Each center-based mission concept design team included at least one panel member who acted as the “science champion.” (4) The panels used the results of the concept design studies to inform their ranking of the most promising mission concepts. (5) The survey committee prioritized the mission concepts identified by the panels. This multistep process was very dynamic in that there was continuous communication and feedback between the panels and the survey committee.

The top large (flagship) and mid-size (New Frontiers) mission sets identified by each of the panels were forwarded to the survey committee, where they were discussed and prioritized. The primary mission prioritization criterion was science return per dollar. The science return was determined in a qualitative manner, and the cost figures came out of the CATE studies. Another important criterion was balance across the solar system and in terms of mission sizes. The prioritization process was driven by consensus, and the survey committee held few, if any, votes. In accordance with the statement of task, large missions were selected and then ranked, whereas mid-size missions were selected from among those identified by the panels but were not ranked. Small-size (i.e., Discovery) missions were not prioritized.

Marcia Rieke began her response to the moderator’s questions by noting that the science community was the source of the science and mission goals discussed in the 2010 astronomy and astrophysics decadal survey. The moderator asked, But didn’t the survey invent the Wide-Field Infrared Survey Telescope (WFIRST)?

3 For more information on the community-based assessment and analysis groups see, for example, Lunar and Planetary Institute, Analysis Groups, available at http://www.lpi.usra.edu/analysis/.
Rieke replied, No; WFIRST was an amalgamation of three independently proposed activities that the survey committee recognized would require exactly the same hardware. The potential science objectives and missions to be prioritized came to the survey committee in response to a series of requests for information (RFIs) issued to the community. The initial RFI was quite general and generated input on a range of different activities, including theoretical studies, technology development activities, and various types of ground- and space-based hardware. Assessment of the input led to subsequent RFIs aimed at obtaining increasingly detailed information about a smaller and smaller number of promising activities. There was an initial worry within the survey committee that the winnowing of potential activities was taking place too early in the study process, because the available resources were such that only a dozen or so activities could be subjected to the CATE process. But subsequent efforts to package the various initiatives within any kind of realistic budget plan showed that these worries were unfounded.

The primary prioritization criterion applied by the survey committee to the activities identified by the PPPs was a direct mapping to the key questions identified by the SFPs. There was a whole suite of additional criteria, including the ability to address multiple questions in more than one subdiscipline; the extent to which a particular activity would contribute to the health of the community; the value to the nation; the value as a precursor to something else; and, for all, of course, technical readiness, cost, and risk. Several activities were proposed that the survey committee really wanted to do—for example, the direct detection of exoplanets—but none appeared to be technically ready. Conversely, there were activities the committee really wanted to do that were technically ready; LSST was the quintessential example.
A major difference between the 2010 astronomy and astrophysics decadal survey and its predecessors was that there was no “grandfathering.” Previously, projects blessed by one survey retained their priority in a subsequent survey. This time, only those projects in an advanced stage of development retained their standing. Had the survey committee not done this, there would have been no opportunity to consider new activities.
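As a purely illustrative sketch, not the survey’s actual method, the kind of screen Rieke describes (map candidate activities to the SFPs’ key questions, then weigh technical readiness, cost, and risk) can be modeled as a filter-then-rank pass. Every activity name, score, and weight below is invented; the survey applied these criteria qualitatively, not with a formula.

```python
# Illustrative filter-then-rank sketch loosely modeled on the astronomy
# survey's prioritization criteria. All inputs are hypothetical.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    key_questions_addressed: int  # how many SFP key questions it maps to
    technically_ready: bool
    science_score: float          # qualitative merit, 0-10 (invented)
    cost_score: float             # affordability, 0-10 (higher = cheaper)

def prioritize(activities, science_weight=0.7, cost_weight=0.3):
    """Drop activities that are not ready or map to no key question,
    then rank the rest by a weighted blend of science and cost scores."""
    viable = [a for a in activities
              if a.technically_ready and a.key_questions_addressed > 0]
    return sorted(
        viable,
        key=lambda a: science_weight * a.science_score
                      + cost_weight * a.cost_score,
        reverse=True,
    )

candidates = [
    Activity("wide-field survey telescope", 3, True, 9.0, 6.0),
    Activity("exoplanet direct-imaging mission", 2, False, 9.5, 3.0),
    Activity("mid-scale instrument program", 1, True, 6.0, 9.0),
]
for a in prioritize(candidates):
    print(a.name)
```

The filter step mirrors the exoplanet example in the text: an activity the committee badly wanted still drops out if it is not technically ready, no matter how high its science score.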
NASA Perspective

Colleen Hartman was then asked by moderator Alan Dressler for a NASA-centric perspective on what she had heard so far. Hartman responded by posing and then answering the question, Why does NASA care about the decadal surveys? The answer is that they are both swords and shields. They are swords because a high decadal ranking provides a program manager with an argument supporting a new activity. They are shields because they protect highly ranked programs from attack. The first decadal survey in any discipline is always difficult. The 2007 Earth science decadal survey was an amazing achievement because nobody at NASA thought that that community could be “corralled.”

Hartman noted that there are plenty of areas for improvement. The issue of mission costing has been raised multiple times so far. The NRC is not going to do a better job of costing its recommended program than NASA can; costing is an inherently difficult endeavor and a primary cause of many of the criticisms leveled against NASA. From a NASA perspective, the principal challenge facing future decadal surveys is how to maintain the NRC’s “golden glow.” NASA and the NRC should each do what they do best.

At one time, the idea of decadal surveys containing only science priorities was attractive to Hartman, but now she is convinced that such a decadal survey would be too vague to serve the role of both a sword and a shield. While the future will be dominated by more and more complex science questions, those holding the purse strings will be more and more focused on total cost and cost control. In such an environment, a “three-worlds” approach might be best. That is, future surveys should formulate their programs in the context of three budgetary scenarios. First is the “heavenly” world we all hope we have. Second is the “nominal” world representing an extrapolation of currently prevailing circumstances. Third is the “evil” world we do not want to experience. NASA would specify these budgetary scenarios, and there would be sufficient spread between them to encompass all likely fiscal environments. Having NASA specify the three scenarios in the statement of task for a new decadal survey would help protect the NRC’s reputation and leave it free to do the things it does best.

AUDIENCE INTERACTION

The moderator, Alan Dressler, then commented that because the discussions ran longer than anticipated, the panel did not have time to address the final questions he had formulated. Of these unposed questions, the one most appropriate for discussion with the audience was, Did the processes employed by the various recent decadal surveys fairly represent the community’s interests and desires? Because no one in the audience was willing to explore the panel’s views on this topic, the floor was opened to more general questions. Workshop participants made comments and posed questions to the panelists, as described below. Topics discussed included the following:

• Implementing the three-worlds scenario,
• Improving mission management, and
• Tension between science and missions.

Implementing the Three-Worlds Scenario

An audience member asked Colleen Hartman how the three-worlds scenario would be implemented. Hartman responded by suggesting that the “heavenly” budget might include an increment of some $200 million over the baseline for the decade. The nominal budget would reflect either the President’s budget proposal for the year the survey was initiated or the enacted budget for the prior year.
The “evil” budget would represent a worse-case scenario, which is included to enable the survey to document all the reasons why it would be a bad option for the nation. Following Hartman’s response, a member of the audience commented that such an approach might have helped the Earth science decadal survey committee, given the dire state of the nominal budget at that time. Finally, Todd Hoeksema noted that the option of recommending more than can reasonably be implemented in the course of a decade has some advantages. Improving Mission Management An audience member asked the panelists what incentives exist within NASA to implement projects on time and under budget. Hartman explained that this is a complex question with many facets. NASA has many defined processes and procedures concerning project management, and many of them are not necessarily consistent with implementing projects on time and within budget. Hartman is leading a team at the Goddard Space Flight Center wrestling with these inconsistencies. Similar efforts are underway at other NASA centers. NASA is not the only organization with such problems. Another audience member asked if anyone at a NASA center gets a reward for implementing a project under budget. Hartman replied that the Radiation Belt Storm Probes mission came in under cost, and the relevant individuals were rewarded. A member of the audience commented that Juno and GRAIL also came in under budget; however, the GEMS mission was over budget, and it was cancelled. The biggest reason missions go over budget, the audience member said, is because the proponents of very complex activities are underbidding. Decision rules in the form of not-to-exceed cost for specific missions might be an appropriate mechanism. A different member of the audience commented that competition makes a difference. Principal investigator (PI)-led missions tend to maintain cost better than center-led missions. 
The JWST cost escalation would never have occurred if its budget had been capped. Another member of the audience responded that he was involved with a recent study of the cost of NASA's planetary science missions, which indicated that the track record of cost overruns for PI-led missions was, until recently, as checkered as that of center-led missions. The turning point with respect to cost discipline among the PI-led missions appeared to be related to the cadence of selections; that is, proposers saw that cost overruns on selected missions were having a significant impact on the rate of selection of new missions.

Tension Between Science and Missions

An audience member asked if it would make sense to separate the science and mission prioritization of future decadal studies (as was done in the 2010 astronomy and astrophysics survey) and to publish the former, together with decision rules, prior to initiating the latter. Marcia Rieke began her response by explaining that the astronomy survey's panel reports were published separately, so the Science Frontiers Panel reports could stand alone. But there is insufficient insight in those panel reports to meld the science you want to do with what is actually possible. There is enormous value in having a survey committee look at a suite of science questions and a suite of mission concepts simultaneously. There are things that the survey committee can do that NASA cannot, such as assessing whether or not a particular mission concept can address multiple science questions. The reason for this is simple: the scientific and technical expertise encompassed by a decadal survey committee and its panels is far greater than that which exists within NASA Headquarters. The blending of key scientific questions into a suite of potential missions represents the essence of a decadal survey.
Stephen Mackwell agreed that it would be possible to draft a series of prioritized science questions spanning the solar system, but the resulting document would be particularly shallow if it did not also address the means by which those questions would be addressed. In principle, you can ask the question: Where do you draw the line between defining science questions, discussing how to implement those questions, and designing specific missions? But actually drawing that line would be very difficult. The planetary science decadal survey took 2 years to complete; if certain key activities had not been done in parallel, it could easily have taken another year. So, while it might be possible to split a future survey into discrete science and mission phases, it might not be practical.

Alan Dressler commented that a previous session might have created the impression that the missions resulting from the recent decadal surveys were very rigid and were described in so much specificity as to impede their implementation by program managers at NASA Headquarters. It should be clear to all concerned that the missions recommended by decadal surveys are just notional concepts whose detailed specification is to be determined by NASA. WFIRST is a good example; there are at least two or three different implementations of this conceptual mission now under consideration. So the magic that takes place at NASA (the transformation of loosely described decadal concepts into actual hardware) is not being usurped by the survey committees.

Marcia Rieke reminded the audience that decadal surveys frequently address the needs of multiple agencies. Astronomy, for example, had NASA's space-based component and the National Science Foundation's (NSF's) ground-based component. An astronomy decadal survey that addressed only science questions and leading implementation issues for NASA would give short shrift to the interests of NSF.

Mackwell reinforced Dressler's point concerning mission specificity by noting his experience on the first midterm review of the planetary science decadal survey. At the time, he was concerned by the PIs' over-specific interpretation of the missions described in the 2003 planetary science decadal survey.4 As a result, the 2011 survey strove to make it clear to all that its recommended missions were just notional concepts.
A certain degree of specificity is necessitated by the congressional mandate to obtain independent cost estimates and, thus, by the realities of the CATE process. There is nothing in the 2011 planetary science survey report to prevent a clever PI or project team from finding and proposing a better, more efficient, or cheaper way to implement the science goals of a specific mission.

A member of the audience commented that the NRC should specify four items when describing a mission: the cost, schedule, capabilities, and risk to mission success. Specifying these four items would enable NASA program managers or PIs to make the appropriate trade-offs and to decide when a mission has gone out of bounds. Managing the risk associated with a portfolio of missions specified using these four factors is relatively straightforward: if a specific mission exceeds the box defined by the four factors, it is cancelled, and the other missions in the portfolio carry on. In response, Dressler noted that if NASA wants future survey committees to take this approach, it should specify as much in the statement of task.

4 NRC, New Frontiers in the Solar System: An Integrated Exploration Strategy, The National Academies Press, Washington, D.C., 2003.
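The four-factor "box" proposed by the audience member can be illustrated with a brief sketch. All names, units, and thresholds below are hypothetical, chosen only to make the decision rule concrete; they do not come from the workshop discussion or any NASA process.

```python
from dataclasses import dataclass

@dataclass
class MissionBox:
    """Hypothetical not-to-exceed bounds for one mission:
    cost ($M), schedule (months), minimum capability score (0-1),
    and maximum acceptable risk score (0-1)."""
    max_cost: float
    max_schedule: float
    min_capability: float
    max_risk: float

@dataclass
class MissionStatus:
    """Current estimates for a mission against the same four factors."""
    name: str
    cost: float
    schedule: float
    capability: float
    risk: float

def within_box(status: MissionStatus, box: MissionBox) -> bool:
    """A mission stays inside its box only while all four factors
    remain within the agreed bounds."""
    return (status.cost <= box.max_cost
            and status.schedule <= box.max_schedule
            and status.capability >= box.min_capability
            and status.risk <= box.max_risk)

def triage(portfolio, box_for):
    """Apply the decision rule: missions that exceed their box are
    cancelled; the other missions in the portfolio carry on."""
    keep, cancel = [], []
    for m in portfolio:
        (keep if within_box(m, box_for[m.name]) else cancel).append(m.name)
    return keep, cancel
```

The point of the sketch is that once the four factors are specified up front, "out of bounds" becomes a mechanical check rather than a negotiation, which is what makes portfolio-level risk management straightforward.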