The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Appendix C

The Rocket Development Program

As an illustration of the methods Los Alamos National Laboratory is developing to evaluate the nuclear stockpile, consider the development program for a ballistic missile target for air defense system testing, referred to as the Rocket Development Program (RDP). The oversight agency for the RDP is the Rocket Development Program Center (RDPC), which is primarily responsible for project management, cost controls, and scheduling. Two groups of engineers are responsible for building separate sections of the rocket: one group is building a booster to send the rocket into the upper atmosphere, and the other is designing the test payload for the rocket. Several subcontractors and vendors provide parts and support to each of the two primary engineering groups.

The RDPC program managers must predict performance and reliability for a system that is still in the design stages, determine whether the system will operate effectively when flown, and identify early any areas of technical risk. These efforts are complicated by the following facts: the rocket development program is extremely expensive; only one or two rockets are built and flown, and they are usually destroyed in the process; and the engineers are rarely able to salvage subsystems for reuse in further iterations of the program.

Because each system flown is unique, there are few direct performance or reliability data available for parts or subsystems on the test rocket. Therefore, the important goals for the program are to collect data to help the air

defense systems understand their likely performance against targets and to fly a trajectory that falls within certain parameters. Accomplishment of both of these goals constitutes mission success.

Visual representations for the RDP were developed using the conceptual graph techniques of Sowa (1984), whose approach combines a mapping to and from natural language with a mapping to logic. A conceptual graph, which consists of concepts and relations connected by arcs, illustrates a proposition with a finite connected bipartite graph: concepts represent any entity, attribute, action, state, or event that can be described in natural language; relations detail the roles that each concept plays; and the arcs serve as connectors between the two. Figure C-1 shows a top-level ontology developed with a conceptual graph representation. In this example, concepts are shown as rectangles, relations as circles, and the relationships between the two as arcs. The ontology captures the basic cognitive categories of information about the RDP. Identifying such categories makes it possible to ask questions about a system even when one is not an expert. In the RDP example illustrated in Figure C-1, the ontology reveals key focus areas, such as: What functions were required in order for a particular mission event to occur? What parts were required for the function to occur? The ontology also differentiates between two stages in the design process: design time, when the engineers are working to plan and build the rocket, and run time, which represents the actual functioning of the rocket during flight.

The ontology developed in Figure C-1 is much too high level to directly support quantitative model development. Instead, it guides the elicitation of expertise necessary to gather the information required for developing quantitative models and metrics.
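The bipartite concept/relation structure of a conceptual graph can be sketched in a few lines of code. Everything below (the concept labels, the "requires" relation, and the required_for helper) is an illustrative toy, not the actual Figure C-1 ontology.

```python
# Toy conceptual graph: concepts sit at the endpoints of each triple and a
# relation node sits in the middle, giving the bipartite concept/relation
# structure described in the text. All labels are illustrative, not taken
# from Figure C-1.
triples = [
    ("EVENT: boosted flight", "requires", "FUNCTION: motor ignition"),
    ("FUNCTION: motor ignition", "requires", "PART: igniter"),
]

def required_for(concept):
    """Answer ontology-style questions such as 'What functions were
    required for this mission event to occur?'"""
    return [tgt for src, rel, tgt in triples if src == concept and rel == "requires"]

print(required_for("EVENT: boosted flight"))  # ['FUNCTION: motor ignition']
```

The same query pattern answers both of the focus-area questions above: asking what an event requires returns functions, and asking what a function requires returns parts.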
After the ontology is developed, one can begin to develop specific representations for each of the concepts, for example, the parts and functions required to instantiate an event. Once a preliminary representation of the important concepts has been developed, one of the most difficult tasks is operationalizing the evaluation metrics. To operationalize metrics such as collecting sufficient data, flying a correct trajectory, and mission success for the RDP, the analyst meets with the project leaders to identify specific goals for the rocket system, to describe an overview of how the rocket will function, and to find out which contractors are responsible for the major areas of the project. For example, flying a correct trajectory involves reaching apogee between 150 and 160 seconds after launch, and collecting sufficient data requires the forward cameras to operate with less than 10 seconds of data loss.
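Operationalized metrics of this kind translate directly into checkable predicates. In the sketch below, the thresholds (the 150-160 second apogee window and the 10-second camera data-loss limit) come from the text; the function names are invented for illustration.

```python
# Sketch of the operationalized RDP metrics described above.

def correct_trajectory(apogee_time_s: float) -> bool:
    # Correct trajectory: apogee reached between 150 and 160 seconds after launch.
    return 150.0 <= apogee_time_s <= 160.0

def sufficient_data(camera_data_loss_s: float) -> bool:
    # Sufficient data: forward cameras operate with less than 10 seconds of data loss.
    return camera_data_loss_s < 10.0

def mission_success(apogee_time_s: float, camera_data_loss_s: float) -> bool:
    # Mission success requires accomplishing both goals.
    return correct_trajectory(apogee_time_s) and sufficient_data(camera_data_loss_s)

print(mission_success(155.0, 4.2))  # True
print(mission_success(165.0, 4.2))  # False: apogee falls outside the 150-160 s window
```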

FIGURE C-1 Ontology for RDP. SOURCE: Leishman and McNamara (2002).

During problem definition, a great deal of information is collected from a variety of stakeholders in the program. A number of tools can be used to structure this information. The first goal during model development for a large, complex system is to develop a qualitative map of the problem space that can be used to develop appropriate quantitative models. This map is often called a knowledge model and can be thought of as an

elaboration of parts of the ontology. Graphical representations are used because most people find them easy to understand and because they are commonly used in many communities (e.g., engineering drawings). In large and complex problems, there are often many communities working on the problem (engineers, physicists, chemists, materials scientists, statisticians, tacticians, field commanders), and each has its own view of the problem definition and solution. The goal of developing qualitative maps is to arrive at a common set of representations that allows everyone to have a shared understanding of the problem to be solved.

Two expansions of the initial ontology are given in Figures C-2 and C-3. In the RDP example, the first specific representation discretized the flight-time events required to fly a threat-representative (TR) trajectory (Figure C-2). Once these events were identified explicitly, they could be mapped into their importance for mission success. Subsequently, each event was represented by three diagrams at a finer level: a functional diagram (Figure C-3) that detailed only the functions required for an event; a subsystem-part diagram that broke subsystems into collections of parts; and a modified series-parallel diagram that specified the order in which parts of a subsystem work together to perform a function. Figure C-3 identifies two primary functions for TR flight, data collection/vehicle tracking and boosted flight, which are themselves broken into several subfunctions. These subfunctions, in turn, can be further specified by the parts and subsystems involved in their performance. The diagrams are important because they help identify the dependencies that will have to be represented in the statistical model.

Definition of the levels that will be included in the problem must be related to the goals. For example, a decision maker may need only a rough comparison of a new design to the old in order to answer the question, "Is the new one at least as good as the old one?" In this case, it may not be necessary to represent the structure of the two systems down to the parts. The extent of information availability, including data and experts, can dictate how levels of detail are identified and chosen for inclusion in the model. For example, if knowledge is completely lacking at a subcomponent level, the problem should be defined and structured at the component level.

Once sufficient granularity has been achieved in the qualitative maps of the problem, the translation to quantitative models is possible. Since the qualitative maps are graphical, it is often helpful to develop graphical representations for the quantitative models as well: for example, reliability block diagrams, fault trees, and Bayesian networks. The Bayesian network shown

FIGURE C-2 Flight-time events for the threat-representative trajectory.

FIGURE C-3 Functional diagram for threat-representative flight.
in Figure C-4 captures a small part of the quantitative structure for the information about events, functions, and parts needed to quantify the model; Figure C-5 is a more traditional Bayesian network that consists of data and parameters.

While the quantitative model is being developed, it is important to examine potential data sources. What data are available to populate the model? Who owns the data? Perhaps most importantly, can the data and the model be used to answer the questions and evaluate the metrics from the problem identification stage of the analysis? One of the features of large and complex problems is the heterogeneity of data sources. Seldom will there be enough designed experimental data to evaluate each metric; consequently, additional sources of information must be used, such as computer models, engineering judgment, historical data, and developmental test data. Table C-1 is a sample of the kinds of data available to populate the Bayesian networks shown in Figures C-4 and C-5.

The heterogeneity of the data requires statistical methodological development to integrate the data and achieve appropriate estimates of uncertainty. The extensive modeling described in previous sections of this report makes explicit where and how the diverse data sources are being used in support of the analysis.

TABLE C-1 Data Available for RDP Bayesian Network

Engineering Judgment
- The probability of the motor mount ring failing catastrophically is under 0.1 percent.
- If the motor mount ring fails catastrophically, then the fins and frame fall off the vehicle.
- There is somewhere between a 5 percent and 10 percent chance that the skin will peel back.
- If the fins or frame are missing, then the vehicle is unstable.
- If the skin peels back, then the vehicle is unstable.
- If the fins warp, then vehicle stability is compromised.

Experimental Data
- There is about a 10 percent chance that the fins will warp during flight.
- The frame will not fail if loads do not exceed 5,000 psi.

Computer Model
- Simulations indicate that there is a 15 percent chance that flight loads exceed 5,000 psi.

FIGURE C-4 Bayesian network.

FIGURE C-5 Bayesian network with statistical parameters and data.

The questions for the statistical analysis are standard: What are the appropriate techniques to combine the available data sources? What are the appropriate graphical displays of information? What predictions or inferences must be made to support decisions? For the RDP problem, the Bayesian network had over 2,000 nodes. Once the data had been identified and mapped to the structure, Markov chain Monte Carlo techniques were used to make a variety of estimates. In particular, RDPC was interested in the probability of mission red (complete failure), yellow (partial failure), or green (success). Initially, these were estimated to be 15 percent (± 5 percent), 60 percent (± 10 percent), and 25 percent (± 5 percent), respectively. However, estimates were available (with associated uncertainties) for probabilities of success and failure of components and functions throughout the system, and these were used to determine where further testing could be of value in increasing the probability of mission green and in decreasing the uncertainty of the estimates.

Not every performance and reliability assessment requires the careful development of a knowledge model. However, for large, complex systems with heterogeneous data sources, the development of a common set of representations has many advantages: the representations provide a common language for all communities to interact with the problem, they can be used to explicitly identify the heterogeneous data sources, and they show an explicit mapping from the problem to the data to the metrics of interest.
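As a rough illustration of how statements of the Table C-1 kind combine into the vehicle-stability fragment of the network, the Monte Carlo sketch below fills in two quantities the table leaves unspecified: the skin-peel probability is taken as the 7.5 percent midpoint of the stated 5-10 percent range, and the frame is assumed to fail whenever flight loads exceed 5,000 psi. Both are assumptions for illustration only.

```python
# Monte Carlo sketch of the vehicle-stability fragment suggested by the
# Table C-1 statements. Assumptions not in the table: skin-peel probability
# is the 7.5 percent midpoint of the stated 5-10 percent range, and the
# frame fails whenever flight loads exceed 5,000 psi.
import random

random.seed(0)

def simulate_flight():
    mmr_fails = random.random() < 0.001     # motor mount ring catastrophic failure
    skin_peels = random.random() < 0.075    # midpoint of 5-10 percent (assumption)
    fins_warp = random.random() < 0.10      # about 10 percent chance during flight
    load_exceeded = random.random() < 0.15  # 15 percent chance loads exceed 5,000 psi

    fins_missing = mmr_fails                     # MMR failure drops fins and frame
    frame_missing = mmr_fails or load_exceeded   # over-load fails the frame (assumption)
    # Stability is lost if fins or frame are missing, the skin peels, or the fins warp.
    return fins_missing or frame_missing or skin_peels or fins_warp

n = 100_000
p_unstable = sum(simulate_flight() for _ in range(n)) / n
print(round(p_unstable, 3))  # roughly 0.3 under these assumptions
```

The actual RDP analysis used Markov chain Monte Carlo over a network of more than 2,000 nodes; this forward-sampling toy only shows how heterogeneous engineering judgment, experimental data, and simulation results can feed a single probability estimate.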