A number of risk assessment and uncertainty concepts are used in very different ways by different analysts, so the following definitions are offered to clarify the committee’s use of terms: hazards refer to threats to people and the things they value (such as ecosystems, security, or mission success in space);1 risk is the probability that a particular event or activity will result in a specified consequence;2 and uncertainty refers to the lack of knowledge and understanding of the structure of a risk and the connections between the stages of risk evolution. Ideally, when certainty is high, the connections among risk stages can be characterized quantitatively. A full risk assessment should always provide not only estimates of risk but also estimates of the associated uncertainties.3 Uncertainties can arise from different sources, such as inadequate data, model parameters, or a lack of scientific understanding of the phenomena involved. Evaluating the type and source of uncertainty, and the ability to reduce it through further research and experience, is an important part of any risk analysis.
Overall risk is typically assessed via a probabilistic risk assessment (PRA). In essence, a PRA attempts to determine the overall risk associated with a particular program or a mission stage by factoring in all known risks, and their corresponding uncertainties, if known. The threat to mission success and human life from the meteoroid and orbital debris (MMOD) environment is one of the risks to be considered within a PRA. In its most general sense, a PRA is a systematic approach to providing quantitative answers to the following fundamental safety questions:
- What can go wrong? (What are the scenarios?)
- How likely is it to happen? (What is the frequency of each scenario?)
- What are its consequences? (What are the consequences of each scenario?)
- What is the uncertainty associated with the state of knowledge regarding these answers?4
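These four questions correspond to the “risk triplet” formulation of Kaplan and Garrick, in which risk is expressed as a set of scenarios, each carrying a frequency, a consequence, and an uncertainty. The structure can be sketched in a few lines of code; the scenario names, frequencies, and error factors below are hypothetical placeholders for illustration, not values from any actual NASA assessment:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str            # what can go wrong?
    frequency: float     # how likely is it? (events per mission)
    consequence: float   # how severe? (here, P(loss of mission | event))
    error_factor: float  # multiplicative uncertainty on the frequency

# Hypothetical scenarios; all values are placeholders for illustration only.
scenarios = [
    Scenario("MMOD penetration of a pressurized module", 1e-4, 1.0, 3.0),
    Scenario("MMOD damage to a radiator loop", 5e-4, 0.2, 2.0),
]

# Point estimate of total risk: sum over scenarios of frequency x consequence.
total_risk = sum(s.frequency * s.consequence for s in scenarios)

# Crude bounds reflecting the stated uncertainty in each frequency.
upper = sum(s.frequency * s.error_factor * s.consequence for s in scenarios)
lower = sum(s.frequency / s.error_factor * s.consequence for s in scenarios)

print(f"risk per mission: {total_risk:.1e} (bounds {lower:.1e} to {upper:.1e})")
```

A real PRA propagates full probability distributions rather than simple error factors, but the structure is the same: enumerate the scenarios, quantify each, and carry the uncertainty alongside the point estimate.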
1 R. Kates, C. Hohenemser, and J.X. Kasperson, eds., Perilous Progress: Managing the Hazards of Technology, Westview, Boulder, Colo., 1985.
2 Kates et al., Perilous Progress: Managing the Hazards of Technology, 1985.
4 S. Kaplan and B.J. Garrick, On the quantitative definition of risk, Risk Analysis 1:11-37, 1981, available at http://josiah.berkeley.edu/2007Fall/NE275/CourseReader/3.pdf.
Before the Challenger accident in 1986, NASA management did not encourage, or seem to understand, the use of PRA, as reflected in accident investigator and Nobel laureate Richard Feynman’s statement: “It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management.”5 After the Columbia shuttle accident, the accident investigation board again urged NASA to enhance its risk analyses.6 This lack of attention to probabilistic risk assessment by NASA management made it difficult for the MMOD programs to become part of any overall risk assessment associated with mission design and operations, since there was no agreed-upon procedure for doing so. This is less true today: NASA management has become increasingly aware of the necessity for risk management, as reflected in NRC studies concerning MMOD with regard to the space shuttle7 and the International Space Station (ISS).8
The initial goals of NASA’s MMOD efforts were to characterize the risk to humans in space, beginning with NASA’s crewed spacecraft programs, more than 50 years ago.9 The primary tool for characterizing risk has been what could be called a Poisson Consensus Model,10 which has the purpose of consolidating theory, measurements, and assumptions into an average event rate where Poisson statistics apply. This approach requires the integration of various statistical distributions (such as velocity and angle of impact) by techniques that were established early in the MMOD programs.11 The history of these consensus models predates the beginning of the space program,12 and they have since been used and their accuracy improved over the years by the international community.
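The essence of such a Poisson model is that once theory, measurements, and assumptions are consolidated into a single average flux, the probability of impact follows directly from Poisson statistics. A brief sketch follows; the flux, area, and mission duration are illustrative assumptions, not NASA figures:

```python
import math

# Assumed inputs (illustrative only): average flux of damaging particles,
# exposed area, and mission duration.
flux = 1e-5    # impacts per m^2 per year above the damaging size
area = 200.0   # exposed spacecraft area in m^2
years = 10.0   # mission duration

# With a constant average rate, the number of impacts is Poisson-distributed
# with mean N = flux * area * time.
mean_impacts = flux * area * years

# P(no impacts) = exp(-N), so P(at least one impact) = 1 - exp(-N).
p_at_least_one = 1.0 - math.exp(-mean_impacts)

print(f"expected impacts: {mean_impacts:.3f}")
print(f"P(at least one damaging impact): {p_at_least_one:.4f}")
```

In practice the single flux number is itself the result of integrating over distributions of particle size, velocity, and impact angle, which is where much of the modeling effort, and much of the uncertainty, resides.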
Over time, NASA’s efforts have expanded to include the characterization of risk to uncrewed spacecraft and the addition of the orbital debris population as another source of risk. This addition quickly led to the conclusion that risk could be reduced by minimizing the growth in the orbital debris population. In addition, just as international interest has increased in minimizing the risk to Earth from natural collisions with comets and asteroids, the NASA MMOD programs have also expanded to minimize the risk to people and assets on the ground from reentering orbital debris. As a result of the 2010 National Space Policy,13 which directs NASA to consider the issues involved in the active removal of large derelict debris from orbit, the goal of minimizing risk on the ground is likely to have increasing priority, and trade-offs between reducing the risks to Earth and the risks to spacecraft in orbit may be required. In addition, the risk posed by MMOD has now expanded to include the possibility of catastrophic damage to a spacecraft resulting from colliding with a tracked object in orbit. Other changes to NASA’s mission could occur in the future, for which NASA may need to be prepared.
The hazard from the MMOD environment represents only one component of the total risk to any system or program. It is up to NASA’s program managers to identify systems critical to their mission and manage the risk to those systems. The responsibilities of MMOD programs include determining the probability of failure of any critical system as a result of being hit by either a meteoroid or orbital debris object. Failure for crewed critical systems is defined as loss of the vehicle or loss of life. In some cases what constitutes failure is obvious, such as the penetration of a pressurized container. In other cases a cause of failure is not as obvious; examples include events that could lead to an electrical failure and be interpreted as such (for example, spraying high-speed ejecta
5 R.P. Feynman, Personal observations on reliability of shuttle, Appendix F in Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, Volume 2, June 6, 1986, available at http://history.nasa.gov/rogersrep/v2appf.htm.
6 Columbia Accident Investigation Board, History as cause: Columbia and Challenger, Chapter 8 in Columbia Accident Investigation Board Report, Vol. 1, NASA, August 2003, available at http://www.sociology.columbia.edu/pdf-files/vaughan5.pdf.
7 National Research Council, Protecting the Space Shuttle from Meteoroids and Orbital Debris, National Academy Press, Washington, D.C., 1997, available at http://www.nap.edu/catalog.php?record_id=5958.
8 National Research Council, Protecting the Space Shuttle from Meteoroids and Orbital Debris, 1997.
9 B.G. Cour-Palais, with the assistance of an ad hoc committee, Meteoroid Environment Model-1969 (Near-Earth to Lunar Surface), NASA Space Vehicle Design Criteria (Environment), NASA SP-8013, March 1969, available at http://www.spaceflightnews.net/special/sp8000/archive/00000012/01/sp8013.pdf.
10 M. Drouin, G. Parry, J. Lehner, G. Martinez-Guridi, J. LaChance, and T. Wheeler, Guidance on the Treatment of Uncertainties Associated with PRAs in Risk-Informed Decisions Making, NUREG-1855, Vol. 1, U.S. Nuclear Regulatory Commission, March 2009.
11 D. Kessler, A Guide to Using Meteoroid-Environmental Models for Experiment and Spacecraft Design Applications, NASA TND-6596, NASA, March 1972.
12 A.C. Lovell, Meteor Astronomy, Oxford University Press, Oxford, U.K., 1954.
13 National Space Policy of the United States of America, June 28, 2010, available at http://www.whitehouse.gov/sites/default/files/national_space_policy_6-28-10.pdf, accessed July 6, 2011.
over a circuit board, severing an electrical wire, or creating a plasma). In these more complex failures, additional hypervelocity testing may be required to determine failure mechanisms.
A solution to reducing failure rates from such collisions can be to add shielding, which sometimes must be customized to minimize the added weight to the spacecraft. Where uncertainty exists, safety factors are sometimes added. In other cases, the risk can be reduced by redundant systems or changes in operations, such as orienting the spacecraft in a direction that minimizes the risk (as has been done for the space shuttle).
It is the responsibility of the MMOD programs, usually through the Hypervelocity Impact Technology Facility, to coordinate with the program managers and offer the best solutions for their mission. The MMOD programs have put together a handbook to aid in selecting spacecraft protection options.14 As stated in the handbook, the definition of “failure” has a significant influence on the resulting risk. For example, failure may be defined as a penetration of a critical item that could lead to either loss of the function of the item, or loss of the crew. Such a definition could lead to a significant amount of hypervelocity testing and shielding development, as was the case for the critical items identified for the ISS.15 Alternatively, the failure criterion could be as simple as the depth of the pits on a window pane that might lead to loss of the window during launch, which was one of the critical items identified for the space shuttle.16 In this case a sufficient amount of hypervelocity testing had already been conducted to identify the frequency with which such pits would occur, and all that was then necessary was to operationally plan to examine the windows after each flight and have enough spare windows on hand as replacements when craters were found that exceeded the critical depth.
The focus on collisions and risk from penetration does not, however, fully cover all of the risks involving orbital debris and interplanetary meteoroids. Other risks are discussed more fully in other chapters. In some cases the difficulty in assessing risk is not the result of poor analysis but is the result of lack of data; for example, as is pointed out in Chapter 4, the risk analysis to be performed is sound but suffers from the lack of measurements in the interplanetary environment.
Finding: NASA’s MMOD risk assessment processes have evolved beyond focusing primarily on the damage to spacecraft from collisions with debris that are too small to track, to incorporating a more complete range of risks. More remains to be accomplished, however, including the need in some cases for more measurements as parameters for risk analyses. As gaps are filled, NASA’s MMOD efforts can progress toward ever more integrative risk assessment in which all sources and types of risk are modeled and assessed.
Recommendation: Although NASA should continue to allocate priority attention and resources to collision risks and conjunction analysis, it should also work toward a broad integrative risk analysis to obtain a probabilistic risk assessment of the overall risks present in the MMOD domain in which all sources of risk can be put in context.
Communication of Information About Uncertainty
NASA’s work on reducing the threat to spacecraft posed by orbital debris and meteoroids faces increasingly challenging problems stemming from the complexity of physical changes in space, changing spacecraft designs, increased international use of space and contributions to debris, and private and public sector initiatives in space. An intrinsic challenge also exists in creating models that fully capture the uncertainties and the phenomena being modeled. Examining the sources of the uncertainties, determining how to reduce them, identifying those that cannot
14 E. Christiansen, J. Arnold, A. Davis, D. Lear, J.-C. Liou, F. Lyons, T. Prior, M. Ratliff, S. Ryan, F. Giovane, B. Corsaro, and G. Studor, Handbook for Designing MMOD Protection, NASA TM-2009-214785, NASA, June 2009.
15 E. Christiansen, K. Nagy, D. Lear, and T. Prior, Space station MMOD shielding, Acta Astronautica 65(7-8):921-929, 2009.
16 K. Edelstein, Orbital Impacts and the Space Shuttle Windshield, NASA-TM-110594, NASA, Washington, D.C., 1995.
be significantly reduced or removed over the near term, and estimating the time and effort required to significantly reduce extant uncertainties are also issues that need to be addressed. It must also be understood that natural variability in the MMOD environment will prevent uncertainties from being removed entirely, no matter how sophisticated and detailed testing programs and modeling efforts become. No less a problem is how then to communicate uncertainties to decision makers or the public. The integration of uncertainty analysis into decision making related to debris impact risks has progressed and can progress further. Since many decisions are made at the mission level, how to effectively communicate uncertainties to people with little formal training in managing uncertainty is a matter of considerable importance. The state of knowledge of how to characterize, catalog, and communicate the range of uncertainties is still evolving.
A description of uncertainty is critical to guiding future research efforts, as well as communicating to those who may be affected by the risk. There is a considerable literature on uncertainty and risk more generally.17 A recent NRC report also captures some of the science of risk and uncertainty.18 The principles of uncertainty are summarized in Box 5.1 of the present report, and it is recommended that the MMOD programs increase their efforts to adhere to those principles.
Finding: The calculation and communication of information about uncertainty are critical to properly assessing operational alternatives based on calculated risks posed by orbital debris.
Uncertainty in MMOD Modeling
All of the MMOD models contain uncertainties (see, for example, the discussions of uncertainties pertaining to the orbital debris and meteoroid models in Chapters 3 and 4 and BUMPER in Chapter 6). The building of these models requires an examination of uncertainties that result from data, which may come from a number of measurements tied together with a number of assumptions. In the past, uncertainties that were judged to be large enough to significantly affect spacecraft designs or operations were identified and brought to the attention of the appropriate program manager. This happened in July 1987, when NASA headquarters’ senior staff was briefed on the uncertainty in the orbital debris flux due to a lack of measurements of debris in the size range of interest to the then-planned space station. This briefing led to the current Haystack observation program. In July 1993, a briefing to the Shuttle Program Office about the uncertainty of possible damage to the space shuttle during a predicted Perseid meteor storm led to a delay in the STS-51 launch and the beginning of the current meteoroid program at Marshall Space Flight Center. Consequently, it appears that the MMOD programs deal effectively with uncertainty when that uncertainty is either small enough to be ignored or so large that it is obvious that more data are required. However, this approach may not be sufficient going forward. Adequate consideration of MMOD uncertainties is becoming more important as the program expands, more data are obtained, and safety requirements become tighter. An increased awareness of uncertainty will also be required to adequately respond to various findings in other chapters of this report.
As discussed elsewhere in this report, there is an opportunity to reduce uncertainties in the environment
17 See, for example, M.G. Morgan and M. Henrion, Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press, Cambridge, U.K., 1990; National Research Council, Understanding Risk: Informing Decisions in a Democratic Society (P.C. Stern and H.V. Fineberg, eds.), National Academy Press, Washington, D.C., 1996; National Science and Technology Council, Grand Challenges for Disaster Reduction, Washington, D.C., 2005; National Research Council, Science and Decisions: Advancing Risk Assessment, The National Academies Press, Washington, D.C., 2009; National Research Council, Science and Judgment in Risk Assessment, National Academy Press, Washington, D.C., 1994; A.M. Finkel, Confronting Uncertainty in Risk Management: A Guide for Decision Makers, Center for Risk Management, Washington, D.C., 1990; R.E. Kasperson, Coping with deep uncertainty: Challenges for environmental assessment and decision making, pp. 337-348 in Uncertainty: Multi-disciplinary Perspectives on Risk (G. Bammer and M. Smithson, eds.), Earthscan, London, U.K., 2008.
18 National Research Council, Science and Decisions: Advancing Risk Assessment, The National Academies Press, Washington, D.C., 2009. Science and Decisions, known as the “Silver Book,” replaced the “Red Book” (National Research Council, Risk Assessment in the Federal Government, National Academy Press, Washington, D.C., 1983).
Recommended Principles for Analysis of Uncertainty and Variability
1. Risk assessments should provide a quantitative, or at least qualitative, description of uncertainty and variability consistent with available data. The information required to conduct detailed uncertainty analyses may not be available in many situations.
2. In addition to characterizing the full population at risk, attention should be directed to vulnerable individuals and subpopulations that may be particularly susceptible or more highly exposed.
3. The depth, extent, and detail of the uncertainty and variability analyses should be commensurate with the importance and nature of the decision to be informed by the risk assessment and with what is valued in a decision. This may best be achieved by early engagement of assessors, managers, and stakeholders in the nature and objectives of the risk assessment and the terms of reference (which must be clearly defined).
4. The risk assessment should compile or otherwise characterize the types, sources, extent, and magnitude of variability and of substantial uncertainty associated with the assessment. To the extent feasible, there should be homologous treatment of uncertainty among the different components of a risk assessment and among different policy options being compared.
5. To maximize public understanding of and participation in risk-related decision making, a risk assessment should explain the basis and the results of the uncertainty analysis with sufficient clarity to be understood by the public and decision makers. The uncertainty assessment should not be a significant source of delay in the release of a risk assessment.
6. Uncertainty and variability should be kept conceptually separate in the risk characterization.
However, there is an additional uncertainty in debris size that cannot be quantified with confidence bars and is likely to be more important; this uncertainty is the result of three assumptions: (1) the distribution of the fragment shapes and composition of test samples used to determine debris size from the RCS is assumed to be representative of the distribution of shapes and composition of debris in orbit (see Chapter 2); (2) those shapes and compositions are then assumed to be adequately described by a single parameter known as a “characteristic length”; and (3) these assumptions carry through to the hypervelocity testing, and the program BUMPER, in which the additional assumption is made that the “characteristic length” of a given piece of debris can be approximated with an aluminum sphere of the same diameter (see Chapter 6). The importance of these last two assumptions can be seen by comparing the results of two studies: under the current set of assumptions, shielding is over-designed;20 but if the debris size had been defined as the mass having a given RCS, and that mass were approximated with an aluminum sphere of the same mass, the shielding would be under-designed.21
However, the assumption that any particular size sphere is an approximation to any assumed or measured distribution of shapes and composition is not supported by an analysis that includes integrating over the distributions of shape and composition.22 Finally, China’s anti-satellite test (see Box 1.2 in Chapter 1) gives reason to question that the assumed distribution of shapes and composition is correct. The area-to-mass ratio of the fragments from
19 E.G. Stansbery, G. Bohannon, C. Pitts, T. Tracy, and J. Stanley, Radar observations of small space debris, Advances in Space Research 13(8):43-48, 1993.
20 J. Williamsen, Review of Space Shuttle Meteoroid/Orbital Debris Critical Risk Assessment Practices, Report No. P-3838, Institute for Defense Analyses, Alexandria, Va., November 2003.
21 B. Cour-Palais, The shape effect of non-spherical projectiles in hypervelocity impacts, International Journal of Impact Engineering 26:129-143, 2001.
22 When this type of analysis is performed to relate characteristic size to RCS, it is heavily weighted toward the more numerous smaller objects. Consequently, if such an analysis were applied to relate impact damage to some characteristic size, it might easily be weighted toward smaller, but higher density iron or aluminum oxide debris objects.
The Haystack Data
Stansbery et al. illustrate with 99 percent confidence bars that when the number of objects passing through the Haystack field of view is large, as it is for the smaller debris, there is very little uncertainty in the average flux for that diameter debris compared to the uncertainty in flux for larger diameters.1 However, “diameter” is not directly measured; radar cross section (RCS) is measured and, consequently, there is an uncertainty in the debris diameter corresponding to a given RCS. In considering Bohannon and Caampued, one would expect the diameter uncertainty also to increase with decreasing flux, given that the distribution of possible RCSs for a given diameter is used in a statistical technique to relate RCS to diameter.2,3 The absence of such an increase in the reported debris diameter uncertainty suggests a problem with how that uncertainty is described. An examination of Bohannon and Caampued4 reveals the probable cause: only an approximate uncertainty is given, which is acknowledged as “depending on the RCS statistics.” After 18 years of observations, those statistics would have reduced both uncertainties considerably. This was illustrated 8 years later at the Third European Conference on Space Debris, when the statistical uncertainty for the smaller, more hazardous size debris had all but disappeared, and assumptions about shape were being tested using radar polarization measurements.5 Shape and mass still remain a significant cause of uncertainty due to incomplete testing and analysis of all assumptions.
1 E.G. Stansbery, G. Bohannon, C. Pitts, T. Tracy, and J. Stanley, Radar observations of small space debris, Advances in Space Research 13(8):43-48, 1993.
2 G. Bohannon and T. Caampued, Debris Size Estimation from Radar Cross Section Data Using Quadratic and Non-Parametric Classifiers, XonTech, Inc. Report No. 930301-BE2198, Van Nuys, Calif., June 1993.
3 G. Bohannon and N. Young, Debris Size Estimation Using Average RCS Measurements, XonTech, Inc. Report No. 930781-BE2247, Van Nuys, Calif., September 1993.
4 G. Bohannon and T. Caampued, Debris Size Estimation from Radar Cross Section Data Using Quadratic and Non-Parametric Classifiers, XonTech, Inc. Report No. 930301-BE2198, Van Nuys, Calif., June 1993.
5 M.J. Matney and E. Stansbery, What are radar observations telling us about the low-Earth orbital debris environment, in Proceeding of the Third European Conference on Space Debris, SP-473, European Space Agency, Paris, France, October 2001.
China’s Fengyun-1C satellite is different from fragments from other known events.23 Consequently, this assumption about the relationship between shape and size needs to be reexamined, possibly leading to new ground tests to obtain representative samples and new RCS calibrations from those samples (for additional discussion on RCS calibrations, see Chapter 2).
The committee noticed a significant gap in the identification of uncertainty in the more recent measurements and models, not only in the models describing the environment but also in models, such as BUMPER, that describe the risk posed by the environment (additional details on BUMPER can be found in Chapter 6). The committee asked for, but did not receive, uncertainty analyses, nor did it receive comparisons of model predictions with measurements, especially for the models used to predict the long-term MMOD environment. The uncertainty in these model results arises not only from the statistical nature of what is being measured, where the uncertainty can be quantified and integrated into an overall risk assessment, but also from the assumptions that go into the models. Although these assumptions cannot be assigned a probability of being correct, they can be varied within the bounds of “reasonable assumptions” to determine the sensitivity of the predicted risk to the assumptions, resulting in a range of possible risks.
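The sensitivity approach just described can be sketched as follows: each unquantifiable assumption is varied across a range of “reasonable” alternatives and the model is re-run, yielding a range of predicted risks rather than a single number. The toy model, the two assumption scalings, and their bounds below are hypothetical stand-ins for a real environment model:

```python
import itertools
import math

def risk_model(flux_scale, density_scale):
    """Toy stand-in for an MMOD risk model: a baseline Poisson penetration
    risk, modified by assumed scalings tied to two model assumptions."""
    baseline_mean = 0.02  # expected penetrating impacts under nominal assumptions
    mean = baseline_mean * flux_scale * math.sqrt(density_scale)
    return 1.0 - math.exp(-mean)

# "Reasonable" bounds for two assumptions that cannot be assigned
# probabilities, e.g., the RCS-to-size calibration and the assumed
# fragment composition.
flux_scales = [0.5, 1.0, 2.0]
density_scales = [0.5, 1.0, 2.0]

# Re-run the model over every combination of assumptions.
risks = [risk_model(f, d) for f, d in itertools.product(flux_scales, density_scales)]

print(f"predicted risk ranges from {min(risks):.4f} to {max(risks):.4f}")
```

The output is not a probability distribution over risk, since the assumptions themselves carry no probabilities, but a bounded range showing how strongly the predicted risk depends on each assumption.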
Consequences of not following these principles have been identified by other NRC studies (see also, for
23 J.-C. Liou and N. Johnson, Characterization of the catalog Fengyun-1C fragments and their long-term effect on the LEO environment, Advances in Space Research 43(9):1407-1415, 2009.
example, U.S. EPA 200424). These pitfalls also apply to the MMOD programs and include (1) not allowing for optimal weighting of the probabilities and consequential errors; (2) not permitting a reliable comparison of alternative decisions; (3) failing to communicate the range of control options that would be compatible with different assessments of the true state of nature; and (4) precluding the opportunity to identify research initiatives. Examples of these pitfalls are characteristic of the CARA/COLA programs, which lack significant uncertainty analysis (see Chapter 9).
In general, NASA MMOD programs have embraced some of the principles identified in Box 5.1. However, as both the agency and the MMOD programs mature, it becomes increasingly important to better characterize risk and uncertainty in all aspects of the MMOD problems being addressed. Other issues to be addressed include identifying the sources of the uncertainties, determining how to reduce them, identifying those that cannot be significantly reduced or removed over the near term, and estimating the time and effort required to significantly reduce extant uncertainties. Of course, natural variability in the MMOD environment will prevent uncertainties from being removed entirely, no matter how sophisticated and detailed testing programs and modeling efforts become.
It is equally important that uncertainty information continue to be communicated to decision makers and program leaders, because they are the ones who will determine how to handle this information within the NASA framework of mission planning and operations. While the calculation and communication of uncertainty information to decision makers, including those who plan space missions, has improved at NASA, it is also apparent that a fully integrated cataloging and assessment of MMOD-related uncertainties does not routinely occur in mission-planning and decision-making activities. As noted above, this type of information is typically conveyed to management only when the uncertainties are either small enough to be ignored or so large that more data or some sort of corrective action is obviously required. Since many of these decisions appear to be made at the program level, effective communication of uncertainty information, both to the public and to the proper management levels, is an issue of considerable importance that needs constant reevaluation and oversight.
Recommendation: NASA’s meteoroid and orbital debris programs should increase their efforts to reduce the uncertainty and variability in models through acquisition of measurements (and where necessary, to do testing and analysis) for continually improving assessment of risk and characterization of uncertainty. Together with its MMOD efforts, NASA should continue to advance the agency’s efforts to present information on uncertainty in risk analyses. Special attention should be given to maximizing public understanding of uncertainty analysis through peer-reviewed papers and other publications.
24 Environmental Protection Agency, An Examination of EPA Risk Assessment Principles and Practices, EPA/100/B-04/001, Washington, D.C., March 2004.