Outlook for Nuclear Power: Presentations at the Technical Session of the Annual Meeting--November 1, 1979, Washington, D.C. (1980)

Chapter: Nuclear Power Reliability and Safety in Comparison to Other Major Technological Systems

Suggested Citation: "Nuclear Power Reliability and Safety in Comparison to Other Major Technological Systems." National Academy of Engineering. 1980. Outlook for Nuclear Power: Presentations at the Technical Session of the Annual Meeting--November 1, 1979, Washington, D.C. Washington, DC: The National Academies Press. doi: 10.17226/18568.

Nuclear Power Reliability and Safety in Comparison to Other Major Technological Systems: Space Program Experience*

GEORGE M. LOW†

*Based on testimony before the Committee on Science and Technology, U.S. House of Representatives, May 24, 1979.
†George M. Low is President of the Rensselaer Polytechnic Institute.

My purpose this morning is to provide an overview of reliability and safety in the space program, as an introduction to subsequent discussions on reliability and safety in the nuclear power industry.

To begin, let me review my credentials to speak on this subject. I am an aeronautical engineer with 27 years' experience in NASA and its predecessor agency, NACA. My entire career—until quite recently, when I became associated with RPI—has been in the fields of aeronautics and space, where reliability and safety are always of paramount importance. I know about complex systems and how they are designed, built, and operated. I had hands-on experience in every facet of the business when I became Apollo Spacecraft Project Manager after the Apollo fire. But I do not know about nuclear systems, except in a most superficial way.

I will describe how we handled safety and mission success in spaceflight, especially in Apollo. But I will not conclude that what we did in Apollo also applies to nuclear power plant safety. That can only be done by those who understand nuclear systems and their operation much better than I do.

A moment ago I mentioned the Apollo fire. In a way that fire was our own "Three Mile Island," only the immediate consequence was much worse in that three men died. As a result, however, we had a much better Apollo: there are those who even believe that without the fire we could not (or would not) have done everything that was necessary to make Apollo an eventual success. Much of what I will have to say here this morning reflects the lessons learned from the Apollo fire. I believe that Three Mile Island can have a similar beneficial effect on the nuclear energy program, and I hope that Three Mile Island will be a catalyst to strengthen our nuclear industry, and not to destroy it.

I prepared the substance of this paper in May 1979, long before the report of the President's Commission on the Accident at Three Mile Island (the Kemeny report) was issued. Yesterday that report did become available, and I studied it in some detail. I was impressed by the fact that many—perhaps most—of the commission's findings relate directly to subjects I will cover in my remarks. As a result I now believe that many of the lessons learned in Apollo apply substantively to the safe operation of nuclear power plants.

SIMILARITIES AND DIFFERENCES

There are many similarities between Apollo and a nuclear system, but there are also many differences. Let me characterize some of both.

Both Apollo and a nuclear power plant are very complex high-technology systems. Both involve machinery, substances, and environments that are inherently dangerous to life. Both grew up with safety being of paramount concern, with the full realization that when the chips are down, safety must come first. Both involve constant interaction and interrelation between man and machine.

It is now quite clear that the Three Mile Island accident involved many complex and interrelated factors: the design of the system and its instrumentation, the reliability of various components, and the qualification of the operators. More often than not, a combination of events—rather than a single factor—is also responsible whenever an accident occurs in flight.

Apollo safety had many dimensions. Our greatest effort went into assuring the safety of the "operators"—the astronauts and the ground crews—for they experienced the greatest exposure. Of equal concern, but much more limited in scope, was the safety of the population at large, for the exposure of the public was limited to the launch and reentry phases of flight. By contrast, in nuclear systems the safety of the public is the safety problem of highest concern.

In Apollo, also, we devoted as much emphasis to mission success as we did to safety, because the very existence of the program depended on achieving the objective of reaching the Moon. Yet "mission success" in the nuclear business is taken for granted and becomes an economic factor rather than a safety factor.

In Apollo we designed, built, and operated a single system, and that system was under the control of a single set of vendors, suppliers, contractors, and government people. In the nuclear power business there are several reactor suppliers and many different designers, suppliers, and operators of the total system. I believe that this difference is especially important when it comes to the design and operation of the complete system—the plumbing, the piping, the valving, and the electrical controls—and the components used in that total system. In Apollo there was essentially one customer, while in the nuclear power industry there are many.

Finally, whereas NASA is a single action-oriented agency, with clear lines of authority and with individual responsibility assigned at each level of the organization, the same is not true in nuclear energy, where NASA is replaced by a combination of the NRC and the utilities.

Because the differences I have just described are significant, some of the elements that were essential in the space program may not bear a direct relationship to nuclear safety. Nevertheless, it may be useful to list how we achieved safety in space—primarily in Apollo. To do this, I will concentrate on two aspects of the space program: design and test, and operations.

DESIGN AND TEST

Apollo was designed for a specific mission: to land men on the Moon and return them safely to Earth. The design stretched the state of the art, not because we wanted to do that, but because we had to in order to accomplish the goal. We used large quantities of propellants—nearly 3,000 tons of oxygen, hydrogen, and kerosene, and a few more exotic ones; new materials were stretched beyond normal limits and designed for extreme light weight; computers and electronic systems were used in novel applications leading the state of the art; automated systems and sequences were carefully balanced with human operations.

The underlying design philosophy was to use redundancy wherever possible, and to provide the simplest possible interconnections among the various systems. Together these made for a very forgiving design: many things could go wrong (and often did) without endangering the mission or the safety of the crew. We recognized that components would fail—statistically there were too many of them for this not to happen—and then designed the system so that a component failure could be tolerated.

I should make an important point here: the operators of the system—in our case the astronauts and the flight controllers—were involved in the design from the very beginning. They asked some of the most important design questions and helped formulate sound design solutions. They placed special emphasis on the design of the instrumentation—the measurements—in an effort to provide unambiguous signals for subsequent operations. In that way we were assured that our systems would not fool or confuse the operators at a critical time.

We established design standards that all of our systems had to meet and developed rigid procedures to assure that they were met. We allocated reliability budgets. We analyzed the design for possible failure modes and effects, sneak circuits (latent electrical paths that can cause unwanted functions to occur), and single-point failures. We placed all changes under the most rigid of controls. Emphasis was on formality and discipline at every step along the way.

Manufacturing and assembly were also carried out to exacting standards. Individual parts were bought only if their pedigree was known. We specified how to solder and how to crimp wires, and we controlled the process of plumbing. Every part of the system was known, its manufacture specified, and the people who performed intricate functions were specially tested and certified.

The proof of the system came from the test program. Everything was tested: piece parts, components, subassemblies, and complete systems. Parts identical to those to be used in flight were subjected to prescribed overstress conditions. In addition, each flight component was acceptance tested to at least the worst-case conditions of flight. Environmental testing was performed under simulated conditions of vibration, acoustics, shock, temperature, corrosive contaminants, and many more. We made enormous investments in test facilities so that we could indeed simulate the environments of space, and we made sure that all components were qualified for flight.

We made a deliberate decision to have test facilities owned by the government, and to have government people involved in the test program. This had several advantages: the vendors and contractors did not have to invest in duplicate test facilities; there was uniformity in test procedures and specifications; and we had a direct overview of the reliability of critical components and systems.

Of course, without standardization and configuration control, the test program would have been meaningless. Components that were flown were identical to those that had been tested. There were no substitutes.

Formality, discipline, and rigor were the key words in the test program. Test specifications were prescribed in advance, test results were audited and certified, all anomalies were reported, and all failures had to be understood and corrective action taken.

We established an intricate network to report problems and failures to all involved in the geographically dispersed Apollo system. No failure was too small to report. I remember receiving midnight calls about a test failure at some distant contractor's plant, if that failure might in some way be related to the hardware to be flown on the next flight.

In every phase of design, manufacture, test, and operations, we held formal reviews, audits, and inspections. There were dozens of them, and they became a way of life: Preliminary Design Reviews, Critical Design Reviews, Design Certification Reviews, Customer Acceptance Readiness Reviews, Flight Readiness Reviews, Launch Readiness Reviews, and Safety Assessment Reviews. In these reviews all failures were reported, and actions taken to resolve them were discussed. All levels of people from contractors and government participated. Formal paperwork was submitted, audited, and approved. Responsibilities and authorities for saying "yes" or "no" were clearly understood.

It is important to recognize that these reviews were prescribed and carried out by the people responsible for getting to the Moon. All were highly motivated engineers who wanted to get on with the job. But we organized ourselves in a way to have the right kind of internal checks and balances to assure safety and mission success. With a single exception, we did not have outsiders looking over our shoulders, prescribing what we should do, telling us how to do it. (This does not mean that we didn't call on outsiders for advice—we often did.) At each step along the way, we had to balance risk and gain; we had to make the decisions that would allow us to meet our objectives on schedule and within cost, and at the same time be safe and successful.

The single exception I just alluded to was the Aerospace Safety Advisory Panel, a group chartered by the Congress to take an outside look at how we were doing. The panel held its own reviews, assured itself that NASA was doing its job, and reported directly to the NASA Administrator as well as the Congress.

But I want to emphasize again that, as Apollo Spacecraft Program Manager, I felt fully responsible for the engineering of the spacecraft and for its safety. Although I endorse safety audits and inspections, these can only work as adjuncts to an already safety-conscious organization. Safety cannot be forced from the outside—it must come from within.

OPERATIONS

Although safety must be designed into a system, the ultimate responsibility for safety is in the hands of the operators. This is why, in manned spaceflight, we insisted upon operator input in the design, and this is also why we placed major emphasis on the selection, qualifications, training, and motivation of the operators.

We began with highly motivated people—astronauts, flight controllers, and the launch team. When they came to us, they had the basic knowledge to understand the fundamentals—the physics, if you will—of the systems they were going to operate. Almost without exception, all were engineers; without exception, all were highly competent. How we selected the astronauts is well known. The ground control teams were selected from among our best engineers and were motivated by the fact that many flight controllers had moved on to top executive positions in NASA. Theirs was not a dead-end job; it was the beginning of an exciting career.

Operators spent years learning about the specific systems they were to control, participated in tests and simulations, and knew the workings of their systems oftentimes better than even the designers. They developed the detailed operating procedures and wrote the manuals for normal and emergency conditions.

All procedures were worked out in detail in advance and were controlled with the same discipline and formality as was the hardware. Crew procedures, mission rules, and the like were under tight configuration control and could only be changed through formal mechanisms.

The single most important training device was the simulator. Simulators were used to help develop procedures and to train and evaluate all operators—flight and ground crews alike. Simulators have an important advantage over actual hardware: they can easily be operated outside the normal envelope. All sorts of off-nominal conditions can be tested.

Simulation is a game of "what if." What if a thruster sticks open? What if a battery fails to take a charge? We put some of our best people to work as simulator operators to try to stump the astronauts and the controllers. Only a fraction of the time was spent simulating a normal mission. Then failure after failure and emergency after emergency were thrown at the operators. They concentrated not on the potentially major disasters, but on the small problems that could lead to such disasters. They learned that, more often than not, it would be a strange combination of events that could lead to a sudden catastrophe. By the time they were done, they had faced almost every conceivable problem and had learned how to handle it.

Perhaps the best example of the value of simulation was Apollo 13. A sudden explosion wrecked multiple spacecraft systems when the flight was 200,000 miles from Earth. The flight controllers took over and pieced together a rescue effort that allowed the crew to return to Earth safely. When it was all over, it was clear that the controllers' detailed understanding of the systems, and their prior simulation of every element of the return (though never exactly the sequence of events which occurred), prevented what could easily have been a disaster.

Organization was especially important for the operational units. Lines of command and control were clearly established well in advance. Every individual knew his responsibilities and his authority. And these were not changed during an emergency. I might mention that the key individual in all manned flights was the flight director, generally a young man in his early thirties, who had complete authority to act under all conditions. Nobody second-guessed him.

The flight director was also a good leader of men. He developed an esprit de corps in his team that I have seldom seen equaled. He made what could have been a dull job (imagine sitting behind a console at 4:00 a.m. during the 84-day Skylab mission) an exciting assignment. It can be done with good people, with proper motivation, and with a promising career as a reward.

A key ingredient in allowing a tightly knit organization to function was a free and open flow of information. While command and control followed clearly established lines, information was available to everybody, not only within NASA but to the general public as well. This, I believe, was also an important factor in maintaining credibility when the chips were down.

I should mention that we had planned, in advance, how best to inform the public in the event of a failure or an accident. Quick and complete reporting of the known facts was the key; speculation beyond the facts was avoided. The flow of information through designated spokesmen was continuous, but those involved in the operation—those who had to solve the problem—were generally called upon to brief the public only after the end of their shift.

CONCLUDING REMARKS

Since preparing this paper I have read the report of the President's Commission on the Accident at Three Mile Island. I was struck by the many areas of overlap between my remarks and the commission's report. Lessons we had learned in Apollo were obviously unknown to the people involved in the design, operation, and management of the Three Mile Island plant. This is not surprising, since the space program and the nuclear industry grew up independently of each other. Yet there are lessons from Apollo (and other space programs) that obviously could be of considerable benefit to the nuclear power industry. These lessons cover a wide variety of fields and disciplines: systems design, control room design, instrumentation, information display, testing, failure reporting, selection and training, simulations, and many more. (I would suggest that the space program also has much to learn from the nuclear industry—after all, most of what has been done to bring nuclear energy to its current state of development has been right, and not wrong.)

I believe it is essential for our economy (and hence for our very survival) that the nuclear power industry get back on its feet, and quickly. Not only must we continue to operate the existing power plants, but we must also complete those under construction and build more. To do this with acceptable risks, the lessons of Apollo (and those of Three Mile Island, of course) should be considered, and used where they apply. In my view, the best way to do this is to involve people who are experienced in the design, operation, and management of space programs in responsible line positions in the nuclear industry. There is no other way to transfer knowledge.

Nuclear Power Reliability and Safety in Comparison to Other Major Technological Systems: Commercial Aircraft Experience

WILLIS M. HAWKINS*

*Willis M. Hawkins is Senior Vice President, Aircraft, Lockheed California Company.

There was a certain amount of hazard that what George Low and I would talk about would be almost identical. But fortunately, George and I seem to have approached the subject of system safety each in a different way. First, when we try to compare what has been done in the aviation industry, particularly in the air transport part of that industry, with what has developed in the nuclear industry, there are many parallels. But I don't propose to present myself as an expert in what the nuclear industry has, or should have, done. I plan to talk only about the air transport industry itself, something I should know about, and hope that you can draw your own conclusions as to what of this experience might be applicable elsewhere. If I can see a parallel, I will, of course, suggest its further consideration.

One of the first things that I would like to say—and we talked about this earlier this morning—is that our industry was permitted to develop in an entirely different environment than the nuclear industry. When we first began to fly and first began to try air transports, the mood of the country and the mood of the people was that a risk was worth taking if one could see some kind of benefit in the future. The total definition of the benefits now, for all of the things that we are doing technologically, is difficult to come by, and some of the "benefits" are almost as controversial as the technology itself. And so, the suggestion that the engineers lay their hearts in front of the public concerning risk also suggests that the engineers ought to have the privilege of telling people what the benefits are as well. That is, many times, out of our control, but nevertheless it is a responsibility that the technical community will have to pick up if it is necessary to publicly discuss all risks. In any case, in the environment of today, it is just possible that we would never have flown at all, and I am grateful, and everyone should be grateful, that the environment then was one of encouragement.

There are many things that are different in an airplane compared to a nuclear power plant, but while being different, they still address similar problems. An airplane, once it is airborne, can't stop. So an airplane emergency has to be handled in a different way than an emergency in something that is on the ground. An airplane has to fail "safe," but in doing so it has to remain operational. We call this "fail operational."

In considering nuclear systems, one finds a mixture of both fail-safe and fail-operational—some elements can be shut down and some can't. In other words, part of a reactor has to keep flying, too. And so the fail-safe and the fail-operational concepts, which involve different technical approaches, are required in both industries.

I propose today to summarize some history and tell you about the continually advancing state of the art in safety for air transport. I am going to do this with pieces of the airplane. I will discuss the structure of an airplane, the power that keeps it aloft, and all of the systems that make it work, and then, lastly, I will talk about our experiences in the user-certification-design process.

Let's start first with the structure. I am pleased, when I look back at the history of aviation, that the technologists—and in those days, they were mechanics—had an early appreciation for the safety of flight, mostly because they were flying their own airplanes. If you look at some of the old biplanes that are still flying today, you will find that they all carried what were called multiple flying wires. These are the wires that carry the lift load of the wings, but there were two of them, either one of which would sustain the load. If one of them parted, one could get back on the ground and fix it. This is a "multiple path" structure, in today's lingo. It failed safe and operational.

When we examine present structures, they are complex and the multiple-path principle isn't obvious. Double structure exists throughout all of our modern airplanes, either actually or through excessive design margins. In some cases double layers of metal are used to carry the loads. Incidentally, I don't know how well a fuselage would fulfill a boiler code, but our bookshelf on pressurized structure is about 42 inches long, too. A pressure vessel that contains the passengers, of course, is a boiler-code type of structure. If you pursue this in detail, you will find that there are multiple ways in which we go about ensuring that this pressure is maintained or that a failure will not be catastrophic. You may not know it when you look at the scenery, but you are looking through three layers of transparent material; either one of two will carry the pressure of the cabin. Each window is mounted in a separate frame so that a frame failure won't take out both of them. And there is a third one inside, so that casual kids or people with diamond rings won't scratch the glass with potential subsequent failure. This kind of structural philosophy is applied throughout the entire design of the airplane, trying to be sure that future failures are only incidents.

So much for the structure. Let's now consider the power that drives the airplane. When we started out, we had only one power plant, and it was obvious reasonably soon that we weren't going to get that power plant to operate reliably enough to make our airplanes into practical, safe flight vehicles. It wasn't too long before we introduced two power plants on an airplane designed to transport people. Actually, in those days, reliability was still pretty grim, and so aircraft with three engines soon showed up. That turned out to be fearfully inconvenient, because the propeller on the south end of the airplane or on the north end of a very large fuselage wasn't very good. And so, quite soon, most transports wound up with four engines.

Then the jet came into being, and these turbine engines would run between 4 and 10 times the hours between failures that a good reciprocating engine would run. This suggested that we could work our way backwards, and we did. The three engines came back into the picture, because it was much more convenient to put three on an airplane with a jet engine, and in many cases we went back to two. There is a lesson that isn't obvious in this history: one shouldn't make laws too soon. I am old enough to know that back in the early days of the four-engined airplane there were some serious discussions about a law that all transport aircraft should have four engines. I would like to suggest that such a law would have been critically limiting to the advances that our technology provided for the public. In dealing with public safety we must be careful about how soon we make laws, lest we stifle benefits.

Let's now talk about the systems in aircraft. There has been a very quiet revolution going on in this technology. The insides of an airplane are mighty complicated, and many of the things that have been done in our industry are directly applicable to almost any interactive mechanism, including nuclear systems. I share George Low's suggestion that somehow the two of us—the two industry groups—should work together to see what we can learn from one another. The concepts of total flight control must certainly be parallel in many respects to total reactor control.

I have talked about failing safe and operational, and that is what one has to do in nuclear systems. There are lots of subtleties in these systems that may not be apparent. We have multiple sources of power for the systems as well as for the airplane itself. We have multiple means of distributing that power to where it is needed. We have multiple mechanisms to move surfaces on the airplane (and there are multiple control surfaces), so that element failures can occur and we can still operate safely. There are some booby traps in these systems, too. If an airplane is operating beautifully on only half of the equipment that is aboard, the pilot had better know it, because with a hidden failure, the next takeoff may be the equivalent of a single-engine airplane instead of a multi-engine airplane. Thus, the signal system that tells about a failure when the airplane doesn't act like there is a failure is just as essential as the prime system.

Of course, when failures occur in an airplane en route, there has to be assurance that a further failure can also be handled. This may require detailed knowledge of obscure backups, and constant training may still not assure one of complete crew familiarity. Thus the industry has developed a very interesting system that could be used elsewhere. We started out calling it the "EE and panic panel." The EE and panic panel gave a warning signal in front of the pilot so that he could not miss it. In addition to warning him, it told him where to look. Elsewhere in the cockpit, or at the engineer's panel, was a much more complete systems diagram with the failed element noted. There are other systems, both installed and in development, in which the flight engineer can call up from the on-board library a diagram of the system as it should be. Thus the engineer can see the difference between a "right" and a "wrong" system and can receive instructions on what he should do about it. The instructions, of course, can be automatic, with indication of the failure or corrective action specifically called up.

The complexities of such systems bring in the computer, as George Low has pointed out in the control of space missions. The computer helps not only in emergency situations, but also in many normal operating modes. The same computer function is the basis for the simulators that are universally used today.

In the aircraft industry, we have come to use an augmented simulator. We call it "the Iron Bird." It is more than a simulator. It has in it everything that is in an airplane. All of the control systems are there; all of the control pistons are there; all of the power sources are there; all of the electrical lines; all of the hydraulic lines. And the essential support structures are all there. It is an airplane on the ground. It is hooked up to a cockpit that looks just like the cockpit of the airplane, and it, too, works just like the cockpit of the airplane. Everybody involved in the development process can get at that simulator. It is in use day in and day out. One can load it in such a way that improbable accidents can overload the system. Purposely, the system is "flown" for years to find failures before they happen in flight. I think the proper use of the Iron Bird is one of the real contributions that has been made to the safety of flight.

Finally, I believe it is pertinent to emphasize helpful elements of the user-certifier-creator relationship. The test programs I have talked about—the loading of test wings, as if operating, until they break; trying to explore the geriatrics of an airplane; the Iron Bird exercises—are all shared by the creator, certifier, and user. The airline pilots and the certifiers are in the cockpits telling the creator where he has done it wrong. The airplane maintenance people are all over the Iron Bird and the mock-ups, looking at whether or not they can get at everything for inspection and repair. User and certifier are at the production line—their own inspectors are at the flight line. Thus we have the maintenance and the inspection experts, the user, and the certifier all involved in the complete development of the airplane. It starts the day the designer lays down the general arrangement drawing and a license is requested.

The developer, user, and certifier all participate in essential system evaluation. They look together at the instrumentation on the airplane. It, too, has to have the same kind of backup systems, and together some interesting rules have been worked out, some as the result of accidents that have bitten us and taught us things. For instance, we accept no signal by implication on an airplane. If there is an actuator somewhere that pushes a push rod that turns a bell crank that pushes another push rod and locks a lock, one doesn't put the switch that says the lock is locked at the motor that drives all this mechanism. The signal switch is put at the lock, where the hook goes around the pinion it is supposed to be locked to. When the switch says it is locked, it is locked, no matter what has happened to the rest of the mechanism. Accepting signals of events by implication is a dangerous booby trap. It is just like the booby trap of multiple structures, where one can't inspect both structures, and an airplane may fly for years and be lost with just one more failure.

The design review process that goes on amongst the creator, the certifier, and the user is a definitive, scheduled operation. It starts at the beginning, and the creator has to respond to suggestions of potential failure as time goes on. And, finally, the development system has to respond to what has happened in flight, even after the airplane has been certificated.

43 finally, the development system has to respond to what has happened in flight, even after the airplane has been certificated. There is one characteristic about the certifier in the aircraft indus- try that I would like to emphasize, because I think it is unique and valuable. The certifier in the case of an aircraft, the Federal Avia- tion Administration (FAA), has a responsibility, by charter, to promote civilian flight. The certifier wants to see that airplanes fly. It is the part of the FAA responsibility to keep the airplanes flying safely. The FAA-aircraft developer-user is not an adversary relationship. This has developed some Useful functional management mechanisms, where the certifier reaches into the company and picks an engineer, trains him properly, and endows him with a second hat; this engineer is not only working for the company to design the airplane, but he is also working for the certifier. He can blow the whistle. He is authorized to blow the whistle when he sees something going on that he thinks is detri- mental from a safety standpoint. He is called upon from time to time to do design reviews on what other engineers are doing. The designer has something that no outside certifier could really get. He has knowledge of the airplane and its systems. This seems to me to be of overwhelming importance. In addition to engineers, we have certified inspectors, certified manufacutirng people, certified manufacturing process people, and certified testing specialists. All of these represent the FAA, and they are an important part of the team that certifies the airplane. They are authorized—in fact, directed— to run design reviews. They are, of course, monitored and constantly covered by fulltime FAA personnel who come directly from the certifying agency. This is a good system. It is a healthy system. It puts more real knowledge into the certifier's actions and decisions than he could ever get any other way. The licensing of the people who operate the airplanes is done almost the same way. There are certified pilots who can certify other pilots: the people who are flying every day, instead of every other week. That too is important and is the proper way to fulfill an essential function. Permit me again to emphasize the promotion aspect of the certifying agency. This is good and is certainly not criminal, as has been sug- gested by some FAA critics. We have had some other history in our business that the nuclear industry is experiencing now—they are right in the middle of how to deal with advancing requirements for certification with operating sys- tems developed under different rules. When you look at some of the old airplanes that are certified and flying today, one has to ask, "How can that airplane possibly be certi- fied with what we know today?" You can still buy a ticket on a DC-3. It is a fine, fine airplane. And the reason that it is still flying and still certified is that it has proven that whatever the new rules, however it was certified and whatever is in it compared to the modern airplane, it works, it works reliably, and it has proven that it can maintain its standards in the face of the advanced world. As we look at the earlier things that have been done, let us be sure that we don't turn them all off without solid reasons. The older systems are providing the benefits that they were designed for and

44 these benefits should not be lost without factually based solid rea- soning. What have I said? I hope I have said that aircraft may have been tougher than nuclear power to develop in the early days, but we had a different kind of an environment. We were privileged to take risks without justifying each and every one we took. As a matter of fact, risk wasn't a dirty word in those days. Maybe what we have learned in the process can be of some help. I hope I have emphasized enough the close relationship among the cer- tifier, the user, and the creator all through the concept of the design. I have tried to emphasize that the development and the testing were carried on with the user, the certifier, and the designer all working together, and the system was simulated up to and including the last nuts and bolts before the airplane was first flown. We need to solve the energy problem that is facing us, just like I think we need to keep flying. I hope that nuclear power will get the long-delayed rational support it needs, and I hope that we won't be shamed into progress by some other more progressive nation. If we pool all of our knowledge, I am convinced that we can have all of the nuclear power we need and safely. I am available to help, if I can, and I would love to listen to what the nuclear industry has already done because it might help the airplanes get better.
