The Cost, Risk, and Technical Readiness Evaluation Process
In response to the statement of task, an independent cost appraisal and technical evaluation (CATE) process was established for the projects considered for recommendation in this report. The CATE process was implemented by an experienced, competitively selected contractor, the Aerospace Corporation, operating in parallel with the committee process described in Chapter 7. The objective of the CATE process was to judge the readiness, technical risk, and schedule risk of the activities under consideration. Schedule estimates and cost appraisals were developed for each activity. While past surveys have focused solely on cost, the current survey committee believes that this number, although important, is only part of the story. Moreover, cost estimates for projects at an early stage of development are inherently less certain because not all design requirements have been specified and not all technical risks have been retired.
For consistency and ease of comparison, the CATE reports for space missions give an appraised program cost in FY2010 dollars. The cost threshold for the CATE process was set at approximately $350 million for NASA projects and approximately $75 million for NSF and DOE projects. Separately from the CATE process, the Committee for a Decadal Survey of Astronomy and Astrophysics developed a cost-spreading tool, using a 3 percent per annum base inflation rate over the decade, to construct notional funding profiles against possible agency funding wedges. The comparison of required funding profiles with future agency budgets was done, the committee believes, in as realistic a manner as possible, although the committee recognizes the considerable uncertainties both in the summed needs of the recommended projects and in the funding available in the future.
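The arithmetic behind such a cost-spreading tool can be sketched as follows. This is an illustration only, not the committee's actual tool: the project cost, start year, and yearly funding fractions below are hypothetical, and only the 3 percent per annum inflation rate comes from the text.

```python
def spread_cost(cost_fy2010, start_year, fractions, inflation=0.03, base_year=2010):
    """Spread a FY2010-dollar cost over successive fiscal years, inflating
    each year's share into then-year dollars at the given annual rate."""
    profile = {}
    for i, frac in enumerate(fractions):
        year = start_year + i
        factor = (1 + inflation) ** (year - base_year)  # FY2010 -> then-year dollars
        profile[year] = cost_fy2010 * frac * factor
    return profile

# Hypothetical $350 million (FY2010) project starting in 2013,
# spread 10/30/30/20/10 percent over five years.
profile = spread_cost(350.0, 2013, [0.1, 0.3, 0.3, 0.2, 0.1])
total_then_year = sum(profile.values())  # exceeds 350.0 because of inflation
```

Deflating each year's entry back to FY2010 dollars recovers the original appraised cost, which is the consistency property such a tool must have.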
The parallel implementation of the committee and CATE processes, shown in Figure 7.1, allowed for timely and efficient data gathering and fact finding by the CATE contractor and the committee while maintaining the independence of each activity. As one of the first activities of the survey, before the CATE process was fully developed, the committee solicited Notices of Intent (NOIs) to gauge the kinds of research activities it could expect to assess during the course of the survey. This first step was followed by receipt of white papers and then two request-for-information cycles (RFI-1 and RFI-2), resulting in multiple submittals from candidate activities. The output of the RFI-2 process was the selection of candidates to be put forward for detailed CATE analysis. The candidates were selected by the Program Prioritization Panels (PPPs) on the basis of scientific priorities together with a scientific evaluation of the technical approaches, and were approved by the committee.
The CATE component of the process was iterative in the early stages, starting with a technical evaluation of the selected candidates and then proceeding to follow-up questions to individual project teams as required. The CATE and survey processes were linked through direct communication between the contractor and committee and panel members, as well as presentations to the committee and PPPs. The interactions focused on ensuring the quality of the assessments by the contractor and engaging the technical expertise of the panels and the committee. Discussions between the PPPs and the cost contractor were essential to ensure that project details were not misinterpreted by the contractor. Intermediate results were presented to the full committee in October 2009 at the committee's fourth meeting, followed by several more iterative steps in which the committee reviewed the final assessments and appraisals for accuracy, realism, and consistency.
Despite the considerable interaction with the committee and panels, the survey process maintained the independence of the contractor so that its final analysis was free from undue influence either by the committee itself or by interests outside the survey. This independence was accomplished by establishing the contractor as a consultant to the National Research Council rather than a direct participant in the committee effort. Therefore, although the committee worked closely with the contractor to provide technical inputs as requested, as well as expert review and commentary, the final result was accepted and certified as independent work performed by the contractor alone. Equally important to the independence of the contractor was the committee's responsibility for reviewing the contractor's work and exercising its judgment in accepting the contractor's results.
A second essential consideration affecting the CATE process was the recognition that ground-based and space-based systems are fundamentally different with respect to how they are funded and developed. This disparity profoundly influenced the methods by which the ground and space systems were evaluated and validated by the contractor. The space-based systems were evaluated statistically using the process
presented in Figure C.1. This process utilized an extensive database available to the contractor from many past projects performed by NASA and an associated array of experienced support contractors. Thus, despite some mission-unique elements, the size and scale of the space projects were well within the experience base of the contractor and the parametric model employed for the analysis by the contractor.
Ground-based systems required a different treatment because they are typically developed by a consortium of universities and/or federally funded agencies with an associated mix of government and private funds. Management and review of these activities involve institutionally driven processes distinct from those used for space-based activities. A relevant cost and schedule database for past large ground-based projects is largely nonexistent. Furthermore, only in the past decade have the size and cost of large ground projects approached those of projects being built for space. Each of the ground-based projects evaluated in the CATE process therefore required an extrapolation from existing facilities using key discriminating factors, following the process shown in Figure C.2.
Because the available database for ground projects did not support a parametric analysis of the kind used for the space projects, a bidirectional analysis was employed. A project's own bottom-up costs were assessed by the contractor in consultation with the committee and panels. Once this first element was completed, the contractor identified the specific discriminating elements requiring cost or schedule analogies and extrapolation. Further information was requested of the activities being assessed when information gaps were identified. The committee considered this approach the most appropriate method for achieving a realistic cost estimate for the ground projects, and it succeeded: the contractor was able to provide an assessment of technical readiness, risk, and cost within the following limitations. The contractor had no independent basis for evaluating the operations cost estimates provided for any ground-based project; those appraisals were constructed by the survey committee on the basis of project input and the experience and expertise of its members. For some projects, the data supplied were insufficient for the contractor to perform a robust independent cost evaluation. In those cases the evaluation was limited to technical readiness and risk, together with identification of the elements of the projects that drive the risks. Productive interactions between the contractor and the panels clarified a number of issues.
As would be expected, the cost appraisal process is highly dependent on both the maturity of the project design and the detail and quality of the available technical information. Overall, the detail of the RFI-2 inputs was excellent, although the majority of the projects evaluated were at a Pre-Phase-A stage of development. For space projects, the dominant cost elements are the instruments (20 percent), the spacecraft system (12 percent), cost reserves (19 percent), and mission threat elements (18 percent), together corresponding to approximately 70 percent of the total mission cost. The threats corresponding to mass and power, launch vehicle, and schedule were quantitatively evaluated by the committee at a general level and then tailored in how they were applied to specific missions. Ground projects were typically found to have shorter development schedules than might be realistic and smaller cost reserves than might be prudent.
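The cost-element breakdown above is simple arithmetic and can be checked directly. The percentages are the ones quoted in the text; the dictionary is merely a convenient way of tabulating them.

```python
# Dominant cost elements for the space projects, as quoted in the text
# (fractions of total mission cost).
dominant = {
    "instruments": 0.20,
    "spacecraft system": 0.12,
    "cost reserves": 0.19,
    "mission threat elements": 0.18,
}

dominant_share = sum(dominant.values())   # 0.69, i.e. approximately 70 percent
other_share = 1.0 - dominant_share        # remaining ~30 percent of mission cost
```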
Because of the immaturity of some of the proposed activities, cost uncertainties are higher than typical for activities moving into development either via NSF’s MREFC process or at the preliminary design review stage for NASA and DOE. The committee worked with the contractor to develop an acceptable set of quantitative metrics that could be used to fairly calculate the probable delta cost driven by the assessed maturity of each mission. These metrics included estimation of growth of applicable system resources such as power and mass along with mission-specific factors.
Incorporating these cost uncertainties yields the cost histogram shown in Figure C.3 for the JDEM-Omega (similar to WFIRST), LISA, and IXO missions; the cost uncertainties are shown as "threats" in the figure. The incorporation of threats and risks resulted in CATE cost totals averaging 55 percent higher than the projects reported on the basis of NASA estimates. The associated S-curves are shown in Figure C.4. An S-curve represents the cumulative probability that a project will be completed at or below a given total cost. The NASA cost estimates fell at approximately the 10 to 15 percent point on the S-curve representing the statistically derived CATE cost for the same mission. Based on historical metrics, the NASA estimates would be expected to grow to approximately the 30 to 50 percent point on the S-curve by the end of mission formulation unless efforts are made to descope or simplify the mission concepts.
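The relationship between a point estimate and its position on an S-curve can be made concrete with a small sketch. The lognormal cost distribution and its parameters below are assumptions chosen for illustration so that the numbers land in the ranges quoted in the text; they are not the CATE model itself.

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def s_curve(cost, median, sigma):
    """Cumulative probability of completing the project at or below `cost`,
    assuming a lognormal cost distribution (an illustrative assumption)."""
    return normal_cdf(math.log(cost / median) / sigma)

# Normalize the NASA estimate to 1.0 and take the CATE appraisal to be
# 55 percent higher; median and sigma here are hypothetical.
median, sigma = 1.35, 0.26
nasa_point = s_curve(1.00, median, sigma)   # falls near the 10-15 percent point
cate_point = s_curve(1.55, median, sigma)   # falls near the 70 percent point
```

Under these assumed parameters, the 55 percent cost delta between the two estimates is exactly what separates a ~12 percent confidence point from a ~70 percent confidence point on the same S-curve.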
The costs shown in Figures C.3 and C.4 for the JDEM-Omega, LISA, and IXO missions represent the full cost to NASA without consideration of ESA participation. The contractor also developed a cost metric for a notional 50-50 NASA-ESA joint program incorporating a 25 percent “foreign participation” penalty based on an assessment of similar missions. Figure C.5 shows the resulting cost to NASA with a comparison of the 100 percent and 50 percent participation shares. Note
that the 50 percent number shown in Figure C.5 does not reflect a perfect 25 percent penalty factor due to some minor variances in the cost distribution for the individual missions.
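One plausible reading of the notional 50-50 arithmetic is a simple multiplicative model: the joint program's total cost grows by the 25 percent foreign-participation penalty, and NASA pays half. The text does not give the contractor's exact formula, so the function below is an assumption for illustration, consistent with the caveat that the realized factor varied somewhat by mission.

```python
def nasa_share_cost(full_cost, share=0.5, penalty=0.25):
    """Notional cost to NASA of a `share` fraction of a joint program whose
    total cost grows by `penalty` relative to a NASA-only implementation.
    This multiplicative model is an illustrative assumption, not the
    contractor's documented method."""
    return full_cost * (1.0 + penalty) * share

# Under this model, a 50 percent share of a penalized joint program costs
# NASA 62.5 percent of the NASA-only mission cost.
half_share = nasa_share_cost(1.0)
```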
Once the CATE effort was complete, an independent validation of the cost estimates was performed using the Complexity Based Risk Assessment (CoBRA) tool developed by the Aerospace Corporation (schedule evaluations were also performed but are not presented). Figure C.6 shows the mapping of the three space mission candidates, JDEM-Omega, LISA, and IXO, on a plot representing the results of approximately 40 analogous successful missions (indicated by green triangles).
The results show excellent correlation with one another and with the existing mission data set, indicating that the contractor estimates compare favorably with the costs of other successful missions of similar complexity. As would be expected, the 70 percent confidence point lies above the average (designated by the green line), which represents roughly the 50 percent point, or mean, of the data set. Similarly, the NASA estimates fall near or below the mean, consistent with the S-curve results discussed above. This plot supports the conclusion that the contractor costs are reasonable and represent a realistic 70 percent confidence estimate based on the information provided for the assessment.