Safety and Security Issues



Air Traffic Control Modeling

KATHRYN T. HEIMERMAN
The MITRE Corporation
McLean, Virginia

Abstract

This paper describes how the U.S. national airspace system (NAS) operates today and discusses anticipated changes. Examples are given of recent modeling efforts. Models help NAS stakeholders make better-informed decisions about how to safely implement agreed-upon goals for the next generation of air traffic control equipment and procedures. The models sometimes suggest to decision makers what the goals ought to be. Six fundamental modeling concepts that lie on the modeling frontier and will influence its future directions are discussed. These concepts are suggestions for future research and areas where interdisciplinary contributions would expedite advances at the frontier.

Introduction

The term modeling spans the spectrum from simple relationships to highly complicated, parallel, fast-time, constructive, "human-in-the-loop," and discrete-event computer simulations. This presentation emphasizes the frontier in modeling. Toward that end, the most exciting work in modeling and simulation is in the development of fundamental modeling concepts. This paper begins with a summary of how the national airspace system (NAS) operates today, then covers anticipated changes and gives examples of models. Then, concepts at the modeling frontier that will guide its future directions, and opportunities for interdisciplinary research, are discussed.

U.S. National Airspace System Operations

Both analytical and computer models are critical tools for researchers of the NAS. The reason is that if we wish to conduct tests or deploy new equipment or procedures, we cannot simply halt NAS operations; the NAS operates continuously. Nor can we simply plug in advanced prototype systems for testing during NAS operations, because human lives would be at stake should anything go wrong. So we use models.

NAS airspace spans all U.S. territories and extends beyond the continental shelf. The NAS includes all air traffic control (ATC) and traffic management facilities and personnel, as well as equipment used for communication, navigation, and surveillance, such as VHF/UHF voice transmitters and receivers, navigation beacons, weather and windshear radars, and instrument landing equipment. The Federal Aviation Administration (FAA) procures, operates, and maintains this equipment. Besides the FAA there are the system users who generate flights, including scheduled passenger and cargo carriers, business jets, the military, and general aviation (recreational and experimental aircraft).

The NAS is the largest command, control, and computer system in the world. On a typical day in the United States, over 1.5 million people fly safely aboard some 130,000 flights (Federal Aviation Administration, 1996). The United States maintains a sterling aviation safety record. In economic terms, the U.S. civil aviation industry contributes about 5 percent of the annual U.S. gross domestic product, so there are also economic incentives for maintaining a safe and healthy civil aviation industry (Wilbur Smith Associates, 1995).

The FAA assures safety by certifying the people, procedures, and equipment that operate and are maintained in the civilian ATC system, and by regulating the aviation industry. For example, the FAA inspects and certifies equipment airworthiness and the skill levels of flight and maintenance crews. Regulations require that, while flying under visual flight rules, pilots must "see and avoid" to ensure safe separation. Safe separation means ensuring three-dimensional distance separation between all aircraft at all times. FAA regulations similarly require that, while flying under instrument flight rules (e.g., passenger flights), pilots must adhere to an FAA-cleared flight plan. Air carrier dispatchers must maintain positive operational control of flights. Positive operational control means uninterrupted origin-to-destination surveillance, communication, and navigation services for every flight. Meanwhile, an FAA ATC specialist ensures safe separation under instrument flight rules.

Starting from gate pushback, the phases of flight include taxi, departure, en route transit, approach, landing, taxi, and at gate (see Figure 1) (Nolan, 1994). Correspondingly, ATC control of a flight is passed through a series of controllers: ground, terminal, departure, en route, approach, terminal, and ground.

FIGURE 1 Phases of flight. Source: The MITRE Corporation.

While en route, a flight passes through imaginary chunks of airspace and is monitored and controlled by facilities called Air Route Traffic Control Centers, or simply "centers." One additional control facility is the ATC System Command Center. There, traffic management specialists monitor and manipulate traffic flows nationwide so that traffic demand does not overwhelm system capacity. The command center coordinates among FAA centers and air carrier dispatchers to reroute traffic around pockets of bad weather or disrupted airports. In times of extreme congestion or service disruption, specialists are authorized to impose traffic flow initiatives such as ground delay, ground stop, or miles-in-trail (instructing pilots to maintain a particular distance between leading and trailing aircraft).

The system described above is how the NAS currently operates. However, economic and safety factors are driving change. Essentially, the anticipated changes are contained in the "Free Flight" concept (see Figure 2). Under Free Flight, today's system is expected to evolve toward one with distributed decision making, increased information flows, and shared responsibility. Free Flight's greater planning and trajectory flexibility is expected to reap economic benefits. To illustrate, consider that scheduled air carriers' collective profits were about $2.5 billion in 1996 (Air Transport Association, 1997). Under Free Flight, preliminary research conducted by MITRE and others indicates that scheduled air carriers may reap cost savings on the order of $1 billion annually from known, near-term communication, navigation, surveillance, and air traffic management enhancements.

Concern for safety is also driving change. The highest levels of the federal government have articulated a goal to improve safety, and the reason can be shown mathematically. U.S. passenger traffic is forecast to grow by 4 percent per year well into the next millennium (Federal Aviation Administration, 1995; Boeing Commercial Airplane Group Marketing, 1996). In addition, the scheduled air carrier accident rate has averaged 42 accidents per year over the past five years, including both fatal and nonfatal accidents (Federal Aviation Administration, 1997). Compounding the 4 percent annual traffic growth while holding the accident rate per flight constant yields a doubling of the annual number of accidents by the year 2014. The only recourse to that unacceptable safety level is to decrease the accident rate by changing the NAS.
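As a rough check of that claim, the arithmetic simply compounds the current accident count at the traffic growth rate. The short sketch below is my illustration, not the paper's; the 1996 base year is an assumption.

```python
# Back-of-the-envelope check of the doubling claim: if the accident rate per
# flight stays fixed while traffic grows 4 percent per year, annual accidents
# grow by the same factor.  Base year and exact counts are assumptions.
base_year = 1996
accidents_per_year = 42.0   # five-year average cited in the text
growth = 1.04               # 4 percent annual traffic growth

year, accidents = base_year, accidents_per_year
while accidents < 2 * accidents_per_year:
    year += 1
    accidents *= growth

print(year, round(accidents))   # prints 2014 85 -- roughly double by 2014
```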

FIGURE 2 Free flight. Source: The MITRE Corporation.

Modeling

Since we cannot simply halt NAS operations, analytical and computer models are critical in developing, testing, and evaluating equipment and procedures that show promise for bringing future NAS goals to fruition. For example, security improvements will come from better weapons detection (Makky, 1997) and passenger screening (National Materials Advisory Board, 1996). Equipment, materials, and procedures to accomplish this are often modeled by Monte Carlo and discrete-event computer simulations. Experimental designs implemented using these models reveal, for example, ways to improve detection error rates.

MITRE has in-house, human-in-the-loop, real-time cockpit and ATC console simulators that can even simulate weather conditions. Field operational conditions are set up, and air traffic controllers and/or certificated pilots are asked to participate. Sometimes simulations run under controlled experimental conditions, but they often run under exploratory research conditions.

One widely recognized large-scale model developed at MITRE is the Detailed Policy Assessment Tool (DPAT) (MITRE, 1997). DPAT is a constructive, discrete-event, fast-time computer simulation distributed over four Sun Microsystems processors, with simulation time synchronized using the Georgia Tech Time Warp product. In about 4 minutes, DPAT can simulate more than 60,000 flights among more than 500 U.S. or international airports. DPAT simulates each flight, computing trajectory, itinerary, and route, as well as runway utilization, system delay and throughput, and other statistics.

Our modeling work has spanned a spectrum of logical paradigms, including deterministic and stochastic rules, fuzzy logic, genetic algorithms, simulated annealing, and mathematical programming. The remainder of this section details those efforts.

In one case we designed algorithms to model stakeholder responses to NAS traffic flow disruptions (Heimerman, 1996). Disruptions could be caused by severe weather, unanticipated airport closure, or other reasons. Responses are decisions to delay, divert, or reroute flights. The algorithms more realistically simulate runway arrival and departure queues as NAS users and command center specialists manage traffic demand. The model also illuminates critical information flows on which decisions are based.

We also constructed a fast-time Monte Carlo model called the Simultaneous Instrument Approach Model. It simulates simultaneous approaches to parallel runways, where there is some probability that a blunder event could occur (see Figure 3). If the blunderer deviates enough, ATC will instruct the evader aircraft to perform a breakout maneuver and go around for a second approach. We designed and verified fuzzy logic algorithms representing the controllers' selection of one of several possible breakout maneuvers.
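To make the flavor of such a model concrete, here is a minimal Monte Carlo sketch of a blunder during simultaneous parallel approaches. It is my illustration only: the runway spacing, approach speed, reaction delays, deviation angles, and the 500-foot criterion are invented parameters rather than MITRE's, and the real model's fuzzy-logic breakout selection is omitted.

```python
import math
import random

# Toy Monte Carlo of a blunder on simultaneous parallel approaches.
# All parameters below are illustrative assumptions.
RUNWAY_SPACING_FT = 3400.0   # lateral distance between the parallel approach courses
SPEED_FT_S = 230.0           # approach ground speed (about 136 knots)
MISS_THRESHOLD_FT = 500.0    # "too close" criterion for this sketch

def remaining_separation():
    """Lateral separation left when the evader finally begins its breakout."""
    blunder_angle = math.radians(random.uniform(10.0, 45.0))  # deviation toward the other course
    atc_delay = random.uniform(5.0, 15.0)     # seconds until the controller detects and transmits
    evader_delay = random.uniform(3.0, 8.0)   # seconds until the evader starts to turn away
    closure = SPEED_FT_S * math.sin(blunder_angle) * (atc_delay + evader_delay)
    return RUNWAY_SPACING_FT - closure

trials = 100_000
violations = sum(remaining_separation() < MISS_THRESHOLD_FT for _ in range(trials))
print(f"P(separation < {MISS_THRESHOLD_FT:.0f} ft | blunder) ~ {violations / trials:.4f}")
```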

FIGURE 3 Blunder event during simultaneous approaches. Source: The MITRE Corporation.

A current modeling effort explores how air carrier dispatchers might respond if given access to information as envisioned under Free Flight. A secure digital data exchange computer network will allow near-real-time exchange of data that have not been shared before. Such a system ought to enable decision makers to shift their attention from mere information exchange to collaborative decision making. We have developed two prototype models to explore the implications of such a hypothesis.

One is a linear program that shows the economic value of information sharing. It shows that, if dispatchers are given an accurate, timely report of reduced arrival capacity at an airport, they can dispatch more economically prudent arrival streams. A second model is a set of coupled difference equations that represent an iterative competitive marketing game among air carriers. We used it to identify conditions under which one competitor dominates. A controller entity assumes strategic roles such as arbiter, referee, enforcer, or negotiator. The model exploits principles from complex adaptive systems theory and nonlinear dynamics.
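The value-of-information idea behind such a linear program can be sketched in a few lines. The following toy formulation is mine, not MITRE's: a dispatcher decides how many of 20 inbound flights to release given a reduced arrival capacity, with invented ground-hold and airborne-hold costs, and the uninformed plan assumes the stale, unreduced capacity.

```python
from scipy.optimize import linprog

# Toy value-of-information LP (illustrative numbers only).
# Variables: d = flights dispatched, g = flights held on the ground,
#            a = flights forced into airborne holding (dispatched beyond capacity).
FLIGHTS, CAPACITY = 20, 12          # 20 inbound flights, reduced arrival capacity of 12
GROUND_COST, AIR_COST = 1.0, 5.0    # assumed relative delay costs

c = [0.0, GROUND_COST, AIR_COST]             # minimize ground plus airborne holding cost
A_eq, b_eq = [[1.0, 1.0, 0.0]], [FLIGHTS]    # every flight is either dispatched or ground-held
A_ub, b_ub = [[1.0, 0.0, -1.0]], [CAPACITY]  # d - a <= capacity, i.e., a >= d - capacity

informed = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("cost with an accurate capacity report:", informed.fun)   # 8.0 (hold 8 flights on the ground)

# Without the report the dispatcher releases all 20 flights; 8 must hold in the air.
naive_cost = AIR_COST * (FLIGHTS - CAPACITY)
print("cost with stale capacity information: ", naive_cost)     # 40.0
```

The gap between the two costs is, in this toy setting, the economic value of the shared capacity information.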

To support these "data greedy" models, parameter estimates are obtained by conducting field tests. One such field test had as its purpose to examine whether datalinks that passed real-time arrival sequence information from the FAA to scheduled air carriers would improve dispatcher situational awareness, operational cost effectiveness, or collaborative problem resolution and decision making. Versions of this field test are currently under way.

Frontier and Interdisciplinary Opportunities

Six fundamental modeling concepts are driving the modeling frontier. These concepts are shaped in aviation applications by an uncompromising regard for safety. The safety constraint translates into a requirement for model credibility, which arises from the processes of verification (assuring that computer programs encode a model's conceptual design) and validation (assuring that the model reflects the real system). As one learns in graduate school, (1) a model is a simplified abstraction of the most salient features of a system, and (2) the modeling process invokes a blend of mathematical relationships and art (Banks and Carson, 1984).

Concept 1: The first fundamental concept driving the frontier is that statements (1) and (2) are coupled to the extent that the art of modeling manifests itself in the ways that different people perceive and define the terms "salient features" and "real system." This fact wreaks havoc on the model validation process, a research area where more work should be done. The difficulty occurs when two experienced and knowledgeable individuals cannot agree on a single description of the system to be modeled. As a consequence, there is difficulty determining not only when the modeling effort is finished, but also which of the inconsistent perceptions of reality to use for comparison during the validation process. Heated arguments can ensue, particularly where safety is involved. These matters are exacerbated by the fact that the English words modelers use to try to resolve their differences are vague relative to the absolute requirement for unambiguous computer instructions.

Concept 2: Model validation may be additionally confounded when the system of interest is not a real system. For example, we might build a model intended to alert us to never-before-imagined system states, behaviors, processes, or boundaries. If the model is not supposed to reflect reality, can we know when we have achieved success in modeling? Model validation is not well understood; interested readers should see Oreskes et al. (1994).

Concept 3: Though not required by statements (1) and (2), most students are taught a host of modeling techniques that first decompose a problem into component parts, solve those parts, and then aggregate the solutions as a solution to the larger problem. This approach is called "reductionism" and results in a pyramid of component models. This "pyramid scheme" approach is popular because it is the only one known to many modelers and because object structures in modern object-oriented programming correlate well with a system's component parts. In fact, however, reductionism is not a productive approach to modeling systems that are not simply the sum of their parts, because examining the components does not capture their dependencies or interactions over time. For example, in a hierarchy of descriptive variables pertinent to some system, the measurement scales at the top and bottom of the hierarchy are often quite different. Such systems are said to exhibit a nonhomogeneous resolution scale across their components, and as a consequence reductionist approaches do not apply. More research to supplement reductionist techniques would be helpful.

Concept 4: Another modeling approach that needs to be examined arises from the fact that throughout the history of mathematical modeling we have proactively conserved computational time in order to generate results in a timely manner. For example, we use look-up tables of formula values or estimate functional values by a truncated Taylor's series expansion. We code techniques like these into our models. However, where beneficial, and in light of today's high computer speeds and memory capacities, we should reappraise previously abandoned techniques such as exhaustive searches of variable spaces and response surfaces.
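As a small illustration of Concept 4, the sketch below exhaustively sweeps a two-variable response surface that older practice might have approximated with a table or a truncated expansion. The delay-cost function and variable ranges are invented for illustration; only the brute-force approach itself is the point.

```python
import itertools

def delay_cost(spacing_nmi, arrivals_per_hour):
    # Invented response surface relating miles-in-trail spacing and airport
    # acceptance rate to a notional congestion-delay cost.
    return ((spacing_nmi - 7.5) ** 2
            + 40.0 / arrivals_per_hour
            + 18.0 * spacing_nmi / arrivals_per_hour)

# Exhaustive sweep: roughly 15,000 evaluations, trivial on a modern machine.
grid = itertools.product(
    [s / 10.0 for s in range(30, 201)],   # spacing from 3.0 to 20.0 nmi in 0.1-nmi steps
    range(30, 121),                       # acceptance rate from 30 to 120 arrivals per hour
)
best = min(grid, key=lambda point: delay_cost(*point))
print("lowest-cost settings (spacing nmi, arrivals/hour):", best)
```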

Concept 5: Modelers, as a group, are very good at writing computer code that describes physical phenomena such as six-degree-of-freedom projectile trajectories. Increasingly important, however, are models of cognitive, social, and behavioral phenomena and of the ways that individual behaviors cause flux in the trends and paths traversed by humans collectively. Examples include dynamics such as the public good, group performance, economic influences on decision making, and the consequences of political struggles.

Concept 6: Related to the preceding point is the concept of modeling how individual people think, process information, and identify the need to reevaluate options or change behavior. Even with results from artificial intelligence, little is known about the links between a decision and the information on which the decision was based. For example, we do not understand "selective attention," in which decision makers turn their attention at points in time to the data they wish to focus on and discard the remaining data. Social scientists could help us understand human reasoning, but that would not be enough. How to encode these processes in computer programs is an additional matter that needs investigation.

Summary

Research providing a richer theory about these six modeling concepts would expedite advances at the frontier. Clearly, contributions could come from other disciplines. Meanwhile, the newest ATC models help NAS stakeholders make better-informed decisions about how to safely implement agreed-upon goals for the next generation of air traffic control equipment and procedures. The models are even helping us frame what the goals ought to be. This is an exciting and dynamic time for the modeling and simulation community and for civil aviation.

References

Air Transport Association. 1997. Annual Report. Washington, D.C.: Air Transport Association.
Banks, J., and J. S. Carson II. 1984. Discrete-Event System Simulation. Englewood Cliffs, N.J.: Prentice-Hall.
Boeing Commercial Airplane Group Marketing. 1996. 1996 Current Market Outlook, World Market Demand and Airplane Supply Requirements. Seattle, Wash.: The Boeing Co.
Federal Aviation Administration. 1995. FAA Aviation Forecasts, Fiscal Years 1995–2006. FAA-APO-95-1. Washington, D.C.: U.S. Department of Transportation.
Federal Aviation Administration. 1996. Administrator's Fact Book. Washington, D.C.: Federal Aviation Administration.
Federal Aviation Administration. 1997. Aviation Safety Statistical Handbook, Vol. 5, No. 6. Washington, D.C.: Federal Aviation Administration, Safety Data Services Division.
Heimerman, K. T. 1996. Algorithms for Modeling Stakeholder Responses to NAS Disruptions. MTR96W56. McLean, Va.: The MITRE Corp.

FIGURE 7 Results of infrared imaging of corroded wing fasteners on a military aircraft (KC-135). Contour maps of processed thermal data quantify the relative metal-loss volume from intergranular corrosion. In (a), damage varies under 13 wing-panel fasteners as shown. In (b), magnified views of 3 selected fasteners reveal, from left to right, slight, substantial, and moderate metal loss (corrosion) under the aircraft fasteners.

Gamma-Ray Nondestructive Assay for Waste Management

Before drums of radioactive or mixed (radioactive and hazardous) waste can be properly stored or disposed of, the contents must be known. Hazardous and "nonconforming" materials (such as free liquids and pressurized containers) must be identified, and radioactive sources and strengths must be determined. Opening drums for examination is expensive, mainly because of the safety precautions that must be taken. Current nondestructive methods of characterizing waste in sealed drums are often inaccurate and cannot identify nonconforming materials.13

Additional NDE and NDA techniques are being developed at LLNL (Decman et al., 1996; Roberson et al., 1995a) and elsewhere (see note 11) to analyze closed waste drums accurately and quantitatively.

At LLNL we have developed two systems to characterize waste drums. One system uses real-time radiography and CT to nondestructively inspect waste drums (Roberson et al., 1995a). The other uses active and passive computed tomography (A&PCT), a comprehensive and accurate gamma-ray NDA method that can identify all detectable radioisotopes present in a container and measure their activity (Decman et al., 1996). A&PCT may be the only technology that can certify when radioactive or mixed wastes are below the transuranic (TRU) waste threshold, determine whether they meet regulations for low-level waste, and quantify TRU wastes for final disposal. Projected minimum-detectable concentrations are expected to be lower than those obtainable with a segmented gamma-ray scanner, one method currently used by the U.S. Department of Energy.

Several tests of A&PCT technology have been made on 55-gallon TRU waste drums at LLNL (see Figure 8), the Rocky Flats Environmental Technology Site (RFETS), and the Idaho National Engineering Laboratory (INEL).14 The drums at LLNL and RFETS contained smaller containers with solidified chemical wastes and low-density combustible matrices, respectively. At INEL, lead-lined drums with combustibles and a very dense sludge drum were characterized. In all cases the plutonium content of the drums ranged from 1 to 70 grams.

At LLNL we are measuring the performance of the A&PCT system using controlled experiments on well-characterized mock-waste drums (Camp et al., 1994; Decman et al., 1996). Results show that the A&PCT technology can determine radioactivity with an accuracy (closeness to the true value) of approximately 30 percent and a precision (reproducibility of the result) of better than 5 percent (Martz et al., 1997a).

Perhaps the most important future development for this technology is to improve the system's throughput. The current throughput is about 1 to 2 days per drum using a single-detector scanner. At LLNL we have designs for upgrading this scanner to multiple detectors, with throughputs estimated to be on the order of a few hours per drum (Roberson et al., 1997). Additional research and development efforts include improving the accuracy of the system and developing self-absorption correction methods.
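To make the distinction between the two figures of merit quoted above concrete, here is a toy calculation of accuracy and precision from repeated assays of a mock drum; the drum loading and assay values are invented, not LLNL data.

```python
import statistics

# Toy accuracy/precision calculation for a mock drum whose true plutonium
# loading is known from how it was built.  All numbers are invented.
true_grams = 10.0
assays = [12.4, 12.9, 13.1, 12.6, 12.8]   # hypothetical repeated A&PCT results

mean = statistics.mean(assays)
accuracy_pct = abs(mean - true_grams) / true_grams * 100   # closeness to the true value
precision_pct = statistics.stdev(assays) / mean * 100      # reproducibility of the result

print(f"accuracy (relative bias):    {accuracy_pct:.0f}%")   # ~28%, the order of the ~30% cited
print(f"precision (relative spread): {precision_pct:.1f}%")  # ~2%, i.e., better than 5%
```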

FIGURE 8 Representative three-dimensionally rendered CT images of an LLNL transuranic-waste drum. (Left) A high-spatial-resolution (2-mm voxels), no-energy-resolution x-ray CT image at 4 MeV reveals the relative attenuation caused by the waste matrix. (Middle) A low-spatial-resolution (50-mm voxels), high-energy-resolution active gamma-ray CT image at 411 keV of the same drum reveals the quantitative attenuation caused by the waste matrix. (Right) A low-spatial-resolution (50-mm voxels), high-energy-resolution passive CT image at 414 keV reveals the location and distribution of radioactive 239Pu in the drum. A&PCT was used to determine that this drum contained 3 g of weapons-grade Pu. Source: Reprinted with permission from Lawrence Livermore National Laboratory (Roberson et al., 1995b).

Summary

This paper provides an overview of some common NDE methods and several examples of the use of different NDE techniques throughout the life cycle of a product. NDE techniques are being used to help determine material properties, design new implants, extend the service life of aircraft, and dispose of radioactive waste safely. It is the opinion of this author and others that the NDE community needs to work more closely with end users throughout the life cycle of a product to better incorporate NDE techniques. The NDE community needs to highlight the importance of NDE in the entire life-cycle process of a product by showing real cost savings to the manufacturing community.

Future Work

All NDE techniques have limitations. Some are fundamental physical constraints, while others can be overcome by developing new NDE system components. Examples include brighter sources; detectors with higher spatial and contrast resolution and greater efficiency; higher-precision manipulators and stages; better image reconstruction and signal- and image-processing algorithms; and faster computers.

Acknowledgments

I want to thank the principal investigators of the projects highlighted in this paper: Diane Chinn, composites durability; Karin Hollerbach and Elaine Ashby, biomechanics and implant analysis; Nancy Del Grande, aircraft inspection; Dwight (Skip) Perkins, visual inspection and S&IP overview; and Graham Thomas, ultrasonics testing overview. I also thank Toby Cordell, Jerry Haskins, and Graham Thomas for several discussions on the application of NDE throughout the life cycle of a product, and my secretary, Jane DeAnda, for helping me edit this paper. The LLNL work described here was performed under the auspices of the U.S. Department of Energy, contract no. W-7405-ENG-48.

References

Azevedo, S. G. 1991. Model-Based Computed Tomography for Nondestructive Evaluation. Ph.D. dissertation. Lawrence Livermore National Laboratory, Livermore, Calif., UCRL-LR-106884, March.
Barrett, H. H., and W. Swindel. 1981. Radiological Imaging: Theory of Image Formation, Detection, and Processing, Vols. 1 and 2. New York: Academic Press.
Bernardi, R. T., and H. E. Martz, Jr. 1995. Nuclear waste drum characterization with 2 MeV x-ray and gamma-ray tomography. Proceedings of SPIE's 1995 International Symposium on Optical Science, Engineering, and Instrumentation (Vol. 2519), San Diego, Calif., July 13–14.
Bossart, P. L., and H. E. Martz. 1996. Visualization software in research environments. Submitted to Engineering Tools for Life-Cycle NDE, Fifth Annual Research Symposium, Norfolk, Va., April. UCRL-JC-122795 Ext. Abs., Lawrence Livermore National Laboratory, Livermore, Calif., March.
Camp, D. C., J. Pickering, and H. E. Martz. 1994. Design and construction of a 208-L drum containing representative LLNL transuranic and low-level wastes. Proceedings of the Nondestructive Assay and Nondestructive Examination Waste Characterization Conference, Pocatello, Idaho, February 14–16.
Chinn, D. J., P. F. Durbin, G. H. Thomas, and S. E. Groves. 1997. Tracking accelerated aging of composites with ultrasonic attenuation measurements. Pp. 1893–1898 in Proceedings of Review of Progress in Quantitative Nondestructive Evaluation, Vol. 16B, D. O. Thompson and D. E. Chimenti, eds. New York: Plenum Press.
Cordell, T. M. 1997. NDE: A full-spectrum technology. Paper presented at the Review of Progress in Quantitative Nondestructive Evaluation, University of San Diego, San Diego, Calif., July 27–August 1.
Decman, D. J., H. E. Martz, G. P. Roberson, and E. Johansson. 1996. NDA via Gamma-Ray Active and Passive Computed Tomography. Mixed Waste Focus Area Final Report. Lawrence Livermore National Laboratory, Livermore, Calif., UCRL-ID-125303, November.
Del Grande, N. K., K. W. Dolan, P. F. Durbin, and D. E. Perkins. 1995. Emissivity-Corrected Infrared Method for Imaging Anomalous Structural Heat Flows. Patent 5,444,241, August 22.
Del Grande, N. K. 1996. Dual band infrared computed tomography: Searching for hidden defects. Science & Technology Review. Lawrence Livermore National Laboratory, Livermore, Calif., UCRL-52000-96-5, May.

Del Grande, N. K., P. F. Durbin, and D. E. Perkins. 1997. Dual-band infrared computed tomography for quantifying aircraft corrosion damage. Presented at the First Joint DOD/FAA/NASA Conference on Aging Aircraft, Ogden, Utah, July 8–10.
Goebbels, K. 1994. Materials Characterization for Process Control and Product Conformity. Boca Raton, Fla.: CRC Press.
Herman, G. T. 1980. Image Reconstruction from Projections: The Fundamentals of Computerized Tomography. New York: Academic Press.
Hollerbach, K., and A. Hollister. 1996. Computerized prosthetic modeling. Biomechanics (September):31–38.
Johansson, E. J., and P. L. Bossart. 1997. Advanced 3-D imaging technologies. In Nondestructive Evaluation, H. E. Martz, ed. Livermore, Calif.: Lawrence Livermore National Laboratory, UCRL-ID-125476, February.
Kak, A. C., and M. Slaney. 1987. Principles of Computerized Tomographic Imaging. New York: IEEE Press.
Krautkrämer, J., and H. Krautkrämer. 1990. Ultrasonic Testing of Materials. Berlin: Springer-Verlag.
Levoy, M. 1997. Digitizing the shape and appearance of three-dimensional objects. Pp. 37–46 in Frontiers of Engineering: Reports on Leading Edge Engineering from the 1996 NAE Symposium on Frontiers of Engineering. Washington, D.C.: National Academy Press.
Martz, H. E., D. J. Decman, G. P. Roberson, and F. Lévai. 1997a. Gamma-ray scanner systems for nondestructive assay of heterogeneous waste barrels. Presented at and accepted for publication by the IAEA-sponsored Symposium on International Safeguards, Vienna, Austria, October 13–17.
Martz, H. E., C. Logan, J. Haskins, E. Johansson, D. Perkins, J. M. Hernández, D. Schneberk, and K. Dolan. 1997b. Nondestructive Computed Tomography for Pit Inspections. Livermore, Calif.: Lawrence Livermore National Laboratory, UCRL-ID-126257, February.
Mascio, L. N., C. M. Logan, and H. E. Martz. 1997. Automated defect detection for large laser optics. In Nondestructive Evaluation, H. E. Martz, ed. Livermore, Calif.: Lawrence Livermore National Laboratory, UCRL-ID-125476, February.
Mast, J. E., and S. G. Azevedo. 1996. Applications of micropower impulse radar to nondestructive evaluation. In Nondestructive Evaluation, H. E. Martz, ed. Livermore, Calif.: Lawrence Livermore National Laboratory, UCRL-ID-122241, February.
McGarry, G. 1997. Digital measuring borescope system. Paper presented at the Review of Progress in Quantitative Nondestructive Evaluation, University of San Diego, San Diego, Calif., July 27–August 1.
Nurre, J. H. 1996. Tailoring surface fit to three-dimensional human head scan data. Proceedings of the SPIE Symposium on Electronic Imaging: Science and Technology, San Jose, Calif., January.
Roberson, G. P., H. E. Martz, J. Haskins, and D. J. Decman. 1995a. Waste characterization activities at the Lawrence Livermore National Laboratory. Pp. 966–971 in Nuclear Materials Management, INMM 36th Annual Meeting Proceedings, Palm Desert, Calif., July 9–12.
Roberson, G. P., D. J. Decman, H. E. Martz, E. R. Keto, and E. J. Johansson. 1995b. Nondestructive assay of TRU waste using gamma-ray active and passive computed tomography. Pp. 73–84 in Proceedings of the Nondestructive Assay and Nondestructive Examination Waste Characterization Conference, Salt Lake City, Utah, October 24–26.
Roberson, G. P., H. E. Martz, D. C. Camp, D. J. Decman, and E. J. Johansson. 1997. Preliminary A&PCT Multiple Detector Design: Upgrade of a Single HPGe Detector A&PCT System to Multiple Detectors. Livermore, Calif.: Lawrence Livermore National Laboratory, UCRL-ID-128052, June.
Russ, J. C. 1995. The Image Processing Handbook, 2nd ed. Boca Raton, Fla.: CRC Press.

NOTES

1. For further information, see papers in Review of Progress in Quantitative Nondestructive Evaluation, D. O. Thompson and D. E. Chimenti, eds., Plenum Press, New York: Vol. 16A&B (1997); Vol. 15A&B (1996); Vol. 14A&B (1995); Vol. 13A&B (1994).
2. For further information, see papers from symposia sponsored by the Johns Hopkins University Center for Nondestructive Evaluation: Proceedings of the 8th International Symposium on Nondestructive Characterization of Materials, June 15–20, 1997, Boulder, Colo.; Proceedings of the 7th International Symposium on Nondestructive Characterization of Materials, June 1995, Prague, Czech Republic; Proceedings of the 5th International Symposium on Nondestructive Characterization of Materials, May 27–30, 1991, Karuizawa, Japan.
3. For further information, see Nondestructive Evaluation: A Tool in Design, Manufacturing and Service, D. E. Bray and R. K. Stanley, 1989, New York: McGraw-Hill. Also, papers in ASNT's Industrial Computed Tomography Conference II, Topical Conference Paper Summaries, May 13–15, 1996, Huntsville, Ala.; ASNT's Industrial Computed Tomography Conference II, Topical Conference Paper Summaries, May 20–24, 1991, San Diego, Calif.; Proceedings of ASNT Topical Conference on Industrial Computerized Tomography, July 25–27, 1989, Seattle, Wash.; Proceedings of ASNT Spring Conference, March 18–22, 1991, Oakland, Calif.
4. For further information, see papers in "Nondestructive Evaluation," H. E. Martz, ed., Lawrence Livermore National Laboratory, Livermore, Calif.: UCRL-ID-119059, February 1995; UCRL-ID-122241, February 1996; UCRL-ID-125476, February 1997.
5. See IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology, January 28–February 2, 1996, San Jose, Calif.
6. For further information, see papers from the SPIE International Symposium on Optical Science, Engineering, and Instrumentation, July 27–August 1, 1997, San Diego, Calif.
7. For further information, see papers in SPIE Proceedings of the 1996 Symposium on Nondestructive Evaluation Techniques for Aging Infrastructure and Manufacturing, Dec. 3–5, 1996, Scottsdale, Ariz.; SPIE Proceedings on Nondestructive Evaluation of Aging Aircraft, Airports, Aerospace Hardware, and Materials, June 6–8, 1995, Oakland, Calif.
8. For medical imaging applications, see papers in Proceedings of the 1997 SPIE Medical Imaging Conference: Image Processing, Feb. 25–28, 1997, Newport Beach, Calif.; Proceedings of the 1997 SPIE Medical Imaging Conference: Physics of Medical Imaging, Feb. 23–25, 1997, San Jose, Calif.; Proceedings of the IEEE Nuclear Science Symposium and Medical Imaging Conference, Nov. 3–9, 1996, Anaheim, Calif.
9. This work is being performed under a cooperative research and development agreement with Boeing.
10. For further information, see papers by J. Fouke, F. Guilak, M. C. H. van der Meulen, and A. A. Edidin in the Biomechanics section of this book.
11. The First Joint DOD/FAA/NASA Conference on Aging Aircraft, July 8–10, 1997, Ogden, Utah.
12. A portion of this work is performed under a cooperative research and development agreement with Bales Scientific Inc. (BSI), Walnut Creek, Calif., in which we adapted LLNL algorithms for their DBIR scanner and thermal image processor.
13. For further information, see papers in Proceedings of the 5th Nondestructive Assay and Nondestructive Examination Waste Characterization Conference, Salt Lake City, Utah, January 14–16, 1997; Proceedings of the 4th Nondestructive Assay and Nondestructive Examination Waste Characterization Conference, Salt Lake City, Utah, October 24–26, 1995.
14. Some of these tests were performed in collaboration with BioImaging Research Inc., of Lincolnshire, Illinois, under a Work for Others Agreement. A mobile waste inspection tomography (WIT) trailer was used to acquire these data. The WIT trailer is described in Bernardi and Martz (1995).

Challenges of Probabilistic Risk Analysis

VICKI M. BIER
University of Wisconsin-Madison
Madison, Wisconsin

The ever-increasing power of technology creates the potential for catastrophic accidents. Because such accidents are rare, though, the database on them is too small for conventional statistics to yield meaningful results. Therefore, sophisticated probabilistic risk analysis (PRA) techniques are critical for estimating the frequency of accidents in complex engineered systems such as nuclear power, aviation, aerospace, and chemical processing.

The approach used in PRA is to model a system in terms of its components, stopping where substantial amounts of data are available for most if not all of the key components. Data are used to estimate component failure rates, and the estimates are then aggregated according to the PRA model to derive an estimate of accident frequency. The accuracy of the resulting estimate will depend on the accuracy of the PRA model itself, but there are good reasons to believe that the accuracy of PRA models has improved over time.

The failure rate estimates needed as input are generally obtained using Bayesian statistics, owing to the sparsity of data even at the component level. Bayesian methods provide a rigorous way of combining prior knowledge (expressed in the form of "prior distributions") with observed data to obtain "posterior distributions." A posterior distribution expresses the remaining uncertainty about a failure rate after observing the data. The posteriors for component failure rates are then propagated through the PRA model to yield a distribution for accident frequency.
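A minimal sketch of that workflow follows. The priors, failure counts, exposure times, and the toy two-component accident logic are all invented for illustration; a real PRA model would involve full event and fault trees.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_posterior(prior_shape, prior_rate, failures, exposure_years):
    """Conjugate update: gamma prior + Poisson failure counts -> gamma posterior."""
    return prior_shape + failures, prior_rate + exposure_years

# Invented priors and operating experience for two components.
shape_a, rate_a = gamma_posterior(0.5, 10.0, failures=1, exposure_years=40.0)
shape_b, rate_b = gamma_posterior(0.5, 10.0, failures=0, exposure_years=25.0)

# Propagate the posteriors through a toy accident model by Monte Carlo:
# an accident requires component A to fail and component B to fail within
# the following 24 hours (a stand-in for a real event/fault-tree model).
n = 100_000
lam_a = rng.gamma(shape_a, 1.0 / rate_a, n)     # failures of A per year
lam_b = rng.gamma(shape_b, 1.0 / rate_b, n)     # failures of B per year
p_b_within_day = 1.0 - np.exp(-lam_b / 365.0)   # chance B fails in the next day
accident_per_year = lam_a * p_b_within_day

print("median accident frequency (per year):", np.median(accident_per_year))
print("95th-percentile frequency (per year):", np.quantile(accident_per_year, 0.95))
```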

Two major challenges of PRA are (1) the reliance on subjective judgment and (2) the difficulty of accounting for human performance in PRA. These issues are discussed below.

Subjectivity

PRAs generally result in distributions for accident frequencies, and these distributions are based extensively on subjective judgment (i.e., expert opinion), both in structuring the PRA model itself and in quantifying prior distributions for component failure rates. It is now generally accepted that the uncertainties in PRA results are not an artifact of PRA but are characteristic of low-frequency, high-consequence events. Explicitly recognizing these uncertainties should lead to better decisions; however, the subjectivity of PRA results poses larger problems. The use of subjective probability distributions in making individual decisions is theoretically well founded. The situation is more complex for societal decisions, however, which pose significant policy and technical questions.

Policy Questions

The subjectivity of PRA results has been partially responsible for delays in implementing risk-based approaches to regulation. Regulators recognize that PRA can make it possible to achieve lower risks than the current body of regulations achieves, at no greater cost. However, because of the complexity of the facilities being analyzed and of the resulting models, regulators are dependent on risk analyses performed by facility owners and operators, and even validating a PRA is a costly undertaking. Even ignoring the possibility of deliberate misrepresentation, the different incentives of a regulator and a licensee, combined with the subjectivity of PRA models, create ample opportunity for results to be "shaded" in licensees' favor.

Taking advantage of the opportunity for risk reduction posed by PRA requires careful attention to regulatory incentives and disincentives. In particular, licensees must have incentives to openly disclose information that may support increased or unfavorable risk estimates. Otherwise, licensees whose PRAs reveal unfavorable results will be unlikely to share those results with regulators, and licensees may be discouraged from upgrading their existing PRAs. These issues are currently being addressed by the U.S. Nuclear Regulatory Commission (NRC), which recently formulated draft regulatory guides for risk-informed decision making.

Technical Questions

In addition to policy questions, reliance on subjective judgment poses interesting technical questions. In particular, PRA practitioners have sometimes treated the subjectivity of their inputs somewhat cavalierly. Significant guidance exists regarding the elicitation of subjective prior distributions, but this guidance is costly to apply, especially when prior distributions are needed for dozens of uncertain quantities. Therefore, it would be desirable to develop less resource-intensive default methods for choosing prior distributions for use in PRA. Similar work has been done in other fields, with attention focused on so-called robust or reference priors. The idea is to let the database speak for itself as much as possible and to avoid selecting priors that may have unduly large influences on the posteriors. While this approach may not work well in PRA because of data sparsity, it would at least seem worthwhile to identify families of priors that are likely to yield unreasonable posteriors. Such research could lead to improved guidance for PRA practitioners and improved credibility of PRA estimates.
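The sensitivity that motivates this research is easy to demonstrate. In the sketch below (my illustration; the priors and the single observed failure are invented), three defensible-looking priors updated with the same sparse data give posterior mean failure rates spanning more than an order of magnitude.

```python
from scipy import stats

# How much the prior drives the posterior when data are sparse:
# one failure observed in 50 component-years (invented numbers).
failures, years = 1, 50.0

priors = {
    "diffuse, Jeffreys-like  Gamma(0.5, 1e-4)": (0.5, 1e-4),
    "optimistic informative  Gamma(2, 2000)  ": (2.0, 2000.0),
    "pessimistic informative Gamma(2, 20)    ": (2.0, 20.0),
}

for name, (shape, rate) in priors.items():
    post_shape, post_rate = shape + failures, rate + years   # conjugate update
    mean = post_shape / post_rate
    p95 = stats.gamma.ppf(0.95, post_shape, scale=1.0 / post_rate)
    print(f"{name}: posterior mean {mean:.1e}/yr, 95th percentile {p95:.1e}/yr")
```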

Human Error and Human Performance

Another challenge to the accuracy of PRA is the difficulty of predicting human behavior. In this discussion I distinguish between human error per se and the effects of organizational factors. Both topics are being addressed by the University of Wisconsin-Madison Center for Human Performance in Complex Systems, which is supported by several major high-technology companies and the NRC.

Human Error

Many large industrial accidents, including those at Three Mile Island and Chernobyl, were caused in part by human errors. Hence, it is natural to wonder whether such errors are adequately incorporated into PRA. Human errors are conventionally divided into errors of omission and errors of commission. Errors of omission are relatively straightforward to model, since they can be explicitly enumerated based on the procedures to be performed. Errors of commission have historically been considered extremely difficult to analyze because of the infinite variety of possible human actions. More recently, it has been recognized that the vast majority of commission errors fall into a few simple categories. Barring sabotage or insanity, people are unlikely to undertake actions that seem unreasonable at the time. Therefore, most errors of commission reflect factors such as shortcuts, competing goals, or misdiagnoses. While these causes are harder to analyze than errors of omission, the recognition that most errors of commission have a rational basis makes them amenable to analysis, and there have been several pilot studies incorporating this approach.

After identifying relevant human errors, the analyst must estimate their probabilities. Progress has been hindered both by the fact that psychologists do not yet know enough about the factors contributing to human error and by the tendency of PRA practitioners to prefer simple engineering-style models of human performance.

While engineers are known for their willingness to make assumptions in order to get the job done, more empirical knowledge of human error would contribute to better assumptions.

Organizational Factors

Another issue of concern is the effect of organizational factors on risk. At least for U.S. commercial nuclear power plants, corporate culture has as much effect on risk as plant design. Some such influences are implicitly taken into account in current PRAs (e.g., in plant-specific data), but it is unclear how risk will change if practices change. Moreover, organizational factors may also have numerous unmodeled influences on risk. These issues are difficult to analyze in part because we cannot yet even reliably quantify corporate culture, let alone identify the features conducive to good performance. Despite these difficulties, the PRA community has recently begun to address organizational factors, and the NRC is currently funding research in this area.

Summary

The state of the art of PRA offers many promising research areas. Interestingly, the engineering basis of PRA seems better established than the input required from other fields. For example, although Bayesian statistical theory is well established, there is room for more work on the implications of alternative prior distributions. More importantly, insights gained from PRA, and the necessity of safely managing complex hazardous systems, should inform the research agendas of social scientists. For example, in the real world errors need not reflect mistakes; they may instead represent people performing well under suboptimal conditions. Thus, broader definitions of "error" and greater attention to context would make some psychological research more relevant.

Questions also remain in organizational behavior. For example, both democratic/participatory and autocratic/hierarchical management styles can clearly work well under the right circumstances, but the ingredients needed to make either style work effectively are not yet known. Such issues are often not prominent on the research agendas of social scientists, but I believe there is room for basic social science research with significant practical benefits.

Today, PRA is being productively applied to a variety of engineering technologies and is being used more extensively in the regulatory process. Since PRA is here to stay, it is time to develop closer ties with other fields. PRA practitioners stand to learn a lot from related areas of research. Moreover, the practical orientation of PRA can yield insights into the most important issues in high-hazard industries and can contribute to more relevant research agendas in other fields.
