Statistical Issues in Defense Analysis and Testing: Summary of a Workshop (1994)

Cooperation Versus Advocacy in Decision Making

Current ideas about quality improvement have been expressed by many, including Deming (1986), and summarized in a recent National Research Council report (Fecso, 1993). Key elements of this philosophy include the need to “emphasize thinking about the entire system rather than individual operations in the system,” to “make decisions based on data,” and to “emphasize nonhierarchical teamwork for decision making and implementation” (Fecso, 1993:27). The Society for Industrial and Applied Mathematics recently published a report (Friedman et al., 1992:1) that declared:

A revolution, one led by the rapid increase of computational capabilities and an increasingly quantitative approach to problem solving, is transforming the manufacturing world. The expensive and time-consuming build-test-fix and build-sell-repair cycles are being displaced by quantitative methods, which increase the chances that the product will be built right the first time it is built.

One of the driving principles behind the modern theory of statistical process control is that the commitment to quality must pervade the organization. This principle has been reinforced by the so-called Taguchi revolution and ideas regarding continuous quality improvement. Premised on the simple fact that experimentation is a positive activity, Taguchi methods are built on the careful study of fractional factorial experiments. Discussions at the workshop suggested that such experimental designs could be more effectively used in DoD testing (see the section on experimental design).
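As an illustration of the kind of design involved (not an example drawn from the workshop itself), the sketch below constructs a small two-level fractional factorial design in Python; the factor labels are hypothetical placeholders for settings that might be varied in a test.

    from itertools import product

    # Illustrative sketch: a 2^(4-1) fractional factorial design with the
    # defining relation D = A*B*C.  Factor names A-D are hypothetical
    # placeholders for test conditions.
    runs = []
    for a, b, c in product([-1, 1], repeat=3):
        d = a * b * c   # the generator confounds D with the ABC interaction
        runs.append({"A": a, "B": b, "C": c, "D": d})

    for i, run in enumerate(runs, start=1):
        print(i, run)

Eight runs replace the sixteen of a full two-level factorial in four factors, at the cost of confounding the main effect of D with the three-factor interaction ABC.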


While there are obvious and important differences between the private sector and the Department of Defense, some instructive comparisons emerged from the workshop.

One of the themes was the value of building a more neutral and cooperative decision-making environment. In the DoD setting, an important distinction emerged between the sharing of information to produce more informed decisions and expecting the voluntary cooperation of military services engaged in a zero-sum game—that is, competing for identical missions during a period of declining budgets. Workshop participants believe that better archiving and sharing of information—for example, by constructing a component reliability data base—could be achieved even within the current constraints of an inherently adversarial system.
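To make the data-base idea concrete, the following minimal sketch (purely hypothetical; no such format was discussed at the workshop) shows one way a shared component reliability record might be represented, with a mean-time-between-failures estimate computed under an assumed exponential failure model.

    from dataclasses import dataclass

    # Hypothetical record layout for a shared component reliability data base.
    # Every field name here is an illustrative assumption, not a DoD format.
    @dataclass
    class ComponentReliabilityRecord:
        component_id: str        # part or subsystem identifier
        test_program: str        # developmental test, operational test, field use
        operating_hours: float   # accumulated time on test
        failures: int            # failures observed in that time

        def mean_time_between_failures(self) -> float:
            """Point estimate of MTBF, assuming an exponential failure model."""
            if self.failures == 0:
                raise ValueError("MTBF estimate undefined with zero observed failures")
            return self.operating_hours / self.failures

Constructing, say, ComponentReliabilityRecord("pump-hyp-001", "developmental test", 1200.0, 3) and calling mean_time_between_failures() would return 400.0 hours; the component name and the numbers are of course invented for illustration.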

Several participants alluded to incentives for groups within DoD to play an advocacy role in the weapon acquisition process. For example, Meth observed that current reliability test planning and analysis approaches are mostly influenced by programmatic considerations. He believes there is a tendency to put the best face forward in presenting test results. Seglie also worried that models can be used for public relations purposes in making sales pitches. The General Accounting Office (GAO) has conducted numerous investigations of weapon system acquisitions, and some of them similarly document the presence of an advocacy environment. The criticisms acknowledged by Duncan in his opening remarks at the workshop and listed earlier include overly optimistic evaluations of test results, inaccurate reports to Congress, and unrealistic testing. Seglie also pointed out that measures of reliability, such as mean time to failure, often differ disturbingly by a factor of 2 or 3 between developmental and operational testing.

The costs and inefficiencies associated with advocacy in a manufacturing context also appear in defense analysis and testing. The program manager of a weapon system under development may not seriously contemplate the possibility that the system might be canceled on the basis of operational testing. This orientation is understandable in view of the large expenditures already sunk into developing an expensive weapon system, which can create a vested interest in proceeding to production. There may also be an expectation that problems uncovered in testing can be corrected later, so that rather than a pass/fail testing regimen, DoD in effect employs a test-fix-test-again cycle that is not (but could be) reflected in the statistical methods used in analysis. Seglie (Appendix B) observed that neither the program manager nor the test manager has an incentive to delay tests. But premature testing—in particular, premature operational testing—may actually consume more resources in (typically destructive) testing than would be needed if operational testing were carried out somewhat later on a better-developed system. Samaniego (1993) expressed this theme more generally when he stressed that it is far more effective to build quality into the product than to correct errors after an inferior product has been developed.
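One family of statistical methods that does reflect a test-fix-test cycle is reliability growth modeling. The sketch below is offered purely as an illustration of that possibility, not as a method endorsed at the workshop; it fits a power-law (Crow-AMSAA) growth model to hypothetical failure times from a time-truncated test.

    import math

    def crow_amsaa_fit(failure_times, total_test_time):
        """Fit a power-law (Crow-AMSAA) reliability growth model to failure
        times from a time-truncated test-fix-test program.

        Returns (beta, lam, instantaneous_mtbf_at_end); beta < 1 indicates
        that reliability is growing as fixes are incorporated.
        """
        n = len(failure_times)
        beta = n / sum(math.log(total_test_time / t) for t in failure_times)
        lam = n / total_test_time ** beta
        # Instantaneous failure intensity at the end of test; its reciprocal is the MTBF.
        intensity = lam * beta * total_test_time ** (beta - 1)
        return beta, lam, 1.0 / intensity

    # Hypothetical failure times (hours) observed during a 1,200-hour test;
    # the numbers are invented for illustration, not workshop data.
    times = [35, 90, 160, 310, 450, 700, 980]
    beta, lam, mtbf = crow_amsaa_fit(times, total_test_time=1200)
    print(f"growth parameter beta = {beta:.2f}, current MTBF ~ {mtbf:.0f} h")

A fitted growth parameter below 1 indicates that the failure intensity is decreasing as fixes are incorporated, behavior that a single pass/fail analysis of one test would not capture.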

Legislative, organizational, and technical barriers would have to be overcome to facilitate the sharing of information and to reduce the emphasis placed on clearing intermediate hurdles in the acquisition process. Duncan, in his opening remarks, also noted that developmental testing and operational testing were separated by an act of Congress in 1983 in the interest of avoiding bias in operational testing and evaluation. (Excepted from this separation is combined operational and developmental testing, which is carried out in the later stages of developmental testing.) Two GAO criticisms also express concern about sharing information across DoD agencies. First, OT&E reports are based primarily on service test reports and hence are not independent of the services. Second, the difference between operational testing and developmental testing is sometimes blurred. Such criticisms may constrain the use of statistical methods that draw on relevant information from prior experience or from other sources. The question is how data can be combined, when doing so is beneficial, without reintroducing the known abuses of blurring laboratory and field data.

William Lese (Appendix B) outlined the acquisition process and the interim decisions that are made at a series of milestones. Cost and operational effectiveness analyses (COEAs) are performed to inform these decisions—particularly in granting approval for concept demonstration (milestone 1) and for development (milestone 2). These stages remain distinct from the stage at which a production decision is made (milestone 3) based in part on the results of operational testing. In attempting to link work done in COEAs and in OT&E, Lese noted several challenges:

  1. COEAs and OT&E are executed by different agencies.

  2. Measures of effectiveness and measures of performance are often developed independently of each other.

  3. The proposed operational test environment frequently differs significantly from that assumed in the COEA.

  4. The COEA is “put on the shelf” at milestone 2, and the rationale for deciding whether a system is a cost-effective approach toward meeting the operational requirement is forgotten.

Despite these obstacles, some progress is being made. The DoD issued guidelines (see Appendix C) in March 1992 to link the measures of effectiveness used in both the COEAs and testing and evaluation. The workshop participants suggested that DoD should strive toward a goal of learning the most about a prospective system with the most efficient expenditure of resources. Stephen Vardeman suggested two possible roles for the OT&E office: (1) an end-of-the-line inspection station expected to certify the appropriateness of particular weapon acquisitions or (2) an information-gathering arm of a larger system that conceives of, develops, produces, and maintains military equipment in the most cost-efficient manner possible. Under the second description, Vardeman noted that the distinction between the developmental and operational testing offices would disappear.

Vardeman also suggested that an information-gathering office should consider methods for developing “soft edge” measures of military value or utility to be associated with various combinations of levels of equipment performance, deployment, and cost. This idea contrasts with the approach of an end-of-the-line inspection station that thinks in terms of critical thresholds and sharp boundaries between acceptable and unacceptable equipment capabilities. He further suggested that such an office should treat the various operational testing and evaluation strategies for all acquisitions simultaneously and in terms of their likely joint impact on the overall expected military value.
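The contrast between sharp thresholds and “soft edge” measures can be made concrete with a small sketch; the requirement level, the logistic scale, and the sample performance values below are all hypothetical and are chosen only to show the difference in behavior near the requirement.

    import math

    REQUIREMENT = 100.0   # hypothetical required level of a test measure (e.g., hours MTBF)

    def sharp_value(performance):
        """End-of-line inspection view: full value above the threshold, none below."""
        return 1.0 if performance >= REQUIREMENT else 0.0

    def soft_value(performance, scale=15.0):
        """Soft-edge view: value rises smoothly as performance approaches and
        exceeds the requirement, so near-misses retain most of their utility."""
        return 1.0 / (1.0 + math.exp(-(performance - REQUIREMENT) / scale))

    for p in (70, 90, 99, 101, 130):
        print(f"performance {p:>3}: sharp {sharp_value(p):.2f}, soft {soft_value(p):.2f}")

Under the sharp rule a performance of 99 is worth nothing and 101 is worth full value, whereas the soft-edge measure assigns them nearly the same utility, which is the kind of behavior a soft-edge measure of military value is intended to capture.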

Other workshop participants similarly questioned the realism and desirability of rigid specifications. Seglie noted that original requirements typically change and therefore should not be regarded as absolute. He suggested that modeling and simulation could be used to characterize the military utility of a given system as a function of its test measure—a measure of effectiveness or performance. Francisco Samaniego noted the relevance of statistical process control in attacking a multitude of quality issues. Both he and Donald Gaver commented on the small amount of comparative experimentation under simultaneous conditions, and Samaniego called for expanded efforts in prototype construction.
