
Industrial Methods for the Effective Development and Testing of Defense Systems (2012)

Chapter: 7 Organizational Structures and Related Issues

Suggested Citation:"7 Organizational Structures and Related Issues." National Research Council. 2012. Industrial Methods for the Effective Development and Testing of Defense Systems. Washington, DC: The National Academies Press. doi: 10.17226/13291.

7

Organizational Structures
and Related Issues

In this chapter, we focus on two topics: the lack of enforcement of existing U.S. Department of Defense (DOD) directives and procedures, and the role of the program manager in the acquisition process.

ENFORCEMENT OF DOD DIRECTIVES AND PROCEDURES

Conclusion 10: Many of the critical problems in U.S. Department of Defense acquisition can be attributed to the lack of enforcement of existing directives and procedures rather than to deficiencies in them or the need for new ones.

Christie (2011) discussed this issue and pointed to several aspects of it:

1.  a lack of discipline in decision making concerning the advancement of programs through the defense acquisition milestone system;

2.  unfortunate incentives that result in overly optimistic initial statements of systems requirements as well as optimism regarding the expeditiousness of development and the costs of production and fielding;

3.  failure to rigorously demonstrate, through empirical testing, the required technological maturity of a component or subsystem before each major milestone decision point;


4.  failure to first establish and then to carry along event-based strategies—instead employing schedule-based strategies—and failure to use strict pass/fail criteria for each phase of development;

5.  failure to carry out continuous, independent assessments of the effectiveness and suitability of defense systems in development from initial development through the various stages of testing and production, extending to early introduction to the field; and

6.  failure to use feedback loops to inform the broad acquisition community as to when acquisition methods have worked and when they have failed so that all can learn from others’ experiences.

We discuss several of these factors throughout this report.

The following actions, some of which are discussed in the report, can help ameliorate these problems:

•   Competitive prototype development and testing should be a strict prerequisite for any new development program prior to entry into engineering and manufacturing development.

•   Emphasis should be on an event-based strategy, rather than a schedule-based strategy, with meaningful and realistic pass/fail criteria for each stage of development. In particular, systems should not be allowed to proceed to operational testing unless that step is supported by developmental test results that strongly indicate the system will pass; such a determination can be greatly aided by conducting a rigorous operational test readiness review.

•   Continuous, independent evaluation and tracking of each program through the stages of development, testing, and production should be required. These assessments should rely heavily on empirical tests and should focus on the capabilities that were the basis for program approval.

Problems with suitability performance of defense systems are just as widespread, and the Defense Science Board (2008) made the following recommendations for remedying them:

1.  Identify reliability, availability, and maintainability (RAM) requirements during the joint capabilities integration development system process, and “incorporate them in the request for proposal as a mandatory contractual requirement” (p. 6).

2.  When evaluating proposals, evaluate the bidder’s approaches to satisfying RAM requirements.


3.  Include a robust reliability growth program as a mandatory contractual requirement and document progress during each major program review.

4.  Include a credible reliability assessment as part of the various stages of technical review.

5.  Raise the importance of achieving RAM-related goals in the responsibilities of program managers.

6.  Develop a standard for RAM development and testing.

7.  Increase the available expertise in reliability engineering.

THE ROLE OF A PROGRAM MANAGER

The concept of having a strong project manager, sometimes called a chief engineer, was pioneered by Honda. It was pervasive in Japan as early as the 1980s (Box et al., 1988) and has become a standard practice in the automotive industry in the United States. The program manager is appointed early in an acquisition process, as soon as product feasibility is demonstrated through a successful market study. The program manager’s responsibility covers the entire spectrum, from planning, design, development, and manufacturing to even initial phases of sales and field support.

The organizational structure of the teams and the implementation details vary across companies, but there is usually continuity, with a few team members carrying over from one phase to be part of the team for the next phase. This practice ensures a smooth transition as well as a transfer of knowledge. But the key person is the program manager, who is fully responsible and accountable for all phases of the product realization process. If the system has difficulties in development, such as delays or cost increases, or if the system underperforms when fielded, final responsibility lies with the program manager. A strong program manager has the authority to assemble the right team members from inside the corporation; to hire or contract for other needed skills; to approve final designs, requirements, vendors, and suppliers; and to set the final schedule. Input from all divisions, including sales, marketing, dealers, and manufacturing plants, is actively solicited, but the final decisions are made by the program manager. Other industries besides automotive also use a similar concept of a single person in charge of the entire product realization process.

The same activities occur in DOD programs in the broader context of the acquisition cycle. Every program is managed sequentially through phases, each followed by a major milestone at which decision makers approve or disapprove the acquisition strategy before the program moves to the next phase of development.1

For DOD programs, however, there are two people with the title of program manager. One is appointed by the defense contractor and generally remains in charge for an extended period of time. The other is designated by DOD: that person is typically a military officer whose chief responsibility is to manage the system's development to the next milestone, but whose tenure is often shorter than the time span between milestones. Tenures have been lengthening of late, but they remain much shorter than development times. The DOD norm is that after a program manager's tour concludes, the person is generally promoted and replaced, and the status of the acquisition program during that person's tenure is not carefully assessed (as it often is in industry). The short tenure and lack of accountability create perverse incentives. For example, a program manager has little motivation to be comprehensive in discovering system defects or design flaws in the early stages of system development. Furthermore, given the turnover, any deficiencies are unlikely to be attributed to a specific program manager. This approach is in stark contrast with industry, where greater stability and a sound incentive structure encourage aggressive efforts to find system defects as early in system development as possible.2

The panel recognizes the challenges associated with program management and does not expect any significant changes to the present system of short-term rotations of military officers as program managers. Nevertheless, we believe that DOD should explore ways to provide more stability, and thereby accountability, to the program management process. Two possibilities are (1) creating a new civilian position whose holder serves as deputy to each of a program's successive program managers and whose tenure spans a substantial portion of the system development cycle, and (2) appointing a deputy program manager at each milestone with the expectation that the deputy will be promoted to program manager.3 Of course, the problem with the incentive structure for program managers will remain, and it is unclear how they would respond to a civilian or to a deputy.

Regardless of these possibilities, the panel believes that there has to be an independent third-party assessment of acquisition category I (ACAT I) systems whenever a program manager leaves. This assessment needs to be carried out by personnel who are outside the influence of the services and, in particular, external to the acquisition contract for the program. It would allow the progress of the system under that program manager to be determined objectively. Moreover, the success of each new program manager should be assessed only on the basis of the status and progress from the point of transition. Such an assessment may also change the incentive structure: each program manager would have an incentive to discover design flaws because the improvement of the system under his or her tenure would be directly evaluated.

1See Appendix B for an overview of the defense acquisition process; see U.S. General Accounting Office (1998) for the role of a program manager.

2For an analogous discussion of space systems, see Defense Science Board (2003).

3Bell (2008:277) argues: "On the other hand, PMs and their PMOs have to start taking the long-term or enterprise view. That is, it is not OK to delay the discovery of technical, schedule, or budget problems until a future PM has no choice but to acknowledge them. PMs need to be rewarded for solving problems, not for postponing them."

We do not offer suggestions on how the performance of program managers should be assessed if they fail to discover design flaws and system defects. In addition, guidelines would have to be developed on how problems from earlier stages of development (for example, a system whose performance was not comprehensively tested or whose discovered flaws were left unaddressed) would affect the assessment of subsequent program managers.

Recommendation 5: The Under Secretary of Defense for Acquisition, Technology, and Logistics should provide for an independent evaluation of the progress of acquisition category I (ACAT I) systems in development when there is a change in program manager. This evaluation should include a review by the Office of the Secretary of Defense (OSD), complemented by independent scientific expertise as needed, to address outstanding technical, manufacturing, and capability issues; to assess the progress of a defense system under the previous program manager; and to ensure that the new program manager is fully informed of and calibrated to present and likely future OSD concerns.

Clearly, there are many details and challenges associated with developing and implementing this recommendation. These are beyond the panel's scope and expertise, but we conclude that there are systemic problems with the current system of program management that are obstacles to the implementation of efficient practices.


