Statistics, Testing, and Defense Acquisition: Background Papers (1999)

1
Strategic Information Generation and Transmission: The Evolution of Institutions in DoD Operational Testing

Eric M. Gaier, Logistics Management Institute; and Robert C. Marshall, Pennsylvania State University

1. Introduction

Several important papers in the field of information and uncertainty have focused on strategic information transmission (see, for example, Milgrom, 1981; Crawford and Sobel, 1982; or Green and Stokey, 1980). The majority of this research has taken the form of principal-agent games. In general, an agent observes some realization of a random variable which affects the payoff for each player. The agent then strategically signals the principal regarding the underlying realization. In the final stage, the principal takes some action which, in conjunction with the realization of the random variable, determines the payoff for each player. In equilibrium, the principal must take account of any bias in the agent's reporting strategy when determining the optimal action.

We present a model which extends the information transmission literature by allowing for a continuous choice of information quality. This is accomplished by letting the agent determine the probability with which he is able to distinguish one state from its complement. We call this stage of the game test design. In equilibrium, the principal must now account for the agent's selectivity in both the information generation and reporting stages. Thus, we present a model in which information is both strategically generated and strategically conveyed.

Since the preferences of the principal and the agent do not necessarily coincide, the test design and reporting process may be significantly biased in favor of the agent. The principal might therefore choose to exercise some oversight authority over the process, and he could do so in several ways. He might choose to extend oversight authority during the test design stage.


Alternatively, the principal might choose to extend oversight authority during the reporting stage. Our model considers each of these cases. As the main result of the paper, we show that oversight of the test design stage always improves the welfare of the principal while oversight of the test reporting stage may not. In addition, we consider the case in which the principal can extend oversight authority over both test design and test reporting.

We believe that the model describes a wide variety of interesting situations. Consider, for example, the promotion of assistant professors in disciplines with exceptionally thin job markets. Individual departments make assessments of candidates and report to the tenure committee. Although the tenure committee makes the final decision, the departments have the necessary expertise to gather the relevant data. Typically the tenure committee establishes the criteria by which individual departments judge the candidates. In the context of our model this is interpreted as oversight of the test design phase. Another interesting application is found in the operational test and evaluation procedures used by the Department of Defense. It is in this context that we develop the model.

The Department of Defense engages in two types of testing throughout the acquisition cycle. The emphasis in developmental testing is on isolating and measuring performance characteristics of individual components of a system. Developmental testing is conducted in a carefully controlled environment by highly trained technical personnel. The emphasis in operational testing, however, is on evaluating the overall capabilities and limitations of the complete system in a realistic operating environment. Operational testing is therefore conducted in a less controlled environment by trained users of the system. It is the role of this type of testing in the acquisition cycle that we investigate below.

The acquisition cycle follows a series of event-based decisions called milestones.1 At each milestone a set of criteria must be met in order to proceed with the next phase of acquisition. Operational testing is one of the last stages in this cycle.

When a system is ready for operational testing, the exact details of the test are prepared by the independent test agencies within each Service. Tests must be prepared in accordance with the Test and Evaluation Master Plan (TEMP), which spells out the critical operational issues to be addressed.

1  

 The interested reader is urged to see the interim report from the Panel on Statistical Methods for Defense Testing (National Research Council, 1995) for a complete description of the current acquisition cycle.


The TEMP is prepared fairly early in the acquisition cycle but is continuously updated and modified. For major systems, both the TEMP and the operational test plan must receive approval from the Office of the Director of Operational Test and Evaluation (DOT&E). This oversight agency was created at Congressional direction in 1983 mainly to oversee the test design process. In this way, Congress is able to extend oversight authority into the test design portion of operational testing. Fairly regularly, resource constraints prevent the testing agencies from addressing all of the critical operational issues. In such cases, testers must determine which issues to address and which to ignore.

The independent test agencies conduct the operational tests and evaluate the results. These evaluations are conveyed directly to the Service Chief, who reports the results to the relevant milestone decision authority. In the case of major systems, decision authority rests with the Undersecretary of Defense for Acquisition and Technology, who is advised by the Defense Acquisition Board. If the Undersecretary approves the acquisition, a procurement request is included in the Department of Defense budget request submitted to Congress. In addition, independent evaluations of test data are conducted by DOT&E, which reports directly to the Secretary of Defense and Congress. In so doing, DOT&E also exercises oversight authority in the reporting process.

The role of operational testing in the acquisition cycle has not always been characterized by the description given above. In fact, the entire procurement process has slowly evolved through a series of reform initiatives. Section 2 provides a brief description of the history of the role of operational testing in the acquisition cycle. We then introduce the model in order to gain insight into this process.

Section 3 provides an overview of the related literature. Section 4 develops the modeling framework and lists the assumptions of our model. In section 5 we introduce several games which are designed to capture the role of operational testing at various points in time. Our results are presented in sections 6 and 7. We conclude with section 8.


2. Historical Evolution of OT&E

The Air Force is generally considered to have been the early pioneer in operational testing. As early as May 1941, the Air Force Air Proving Ground Command was involved in the testing of new aircraft designs for possible procurement. Although operational testing in the other Services was soon initiated, the absence of strong oversight from the Department of Defense allowed each Service to develop unique regulations and procedures. Prior to 1970, for example, the Navy relied heavily on the subjective opinions of a few well-qualified officers. Little emphasis was given to the generation of verifiable data. Over the same time period, however, the Air Force had gone to great lengths to define a set of formal procedures and guidelines for the conduct of OT&E. As a result, Air Force testing generally produced objective data but lacked the flexibility to adjust to the specific requirements of individual systems.

Prior to 1971, the organization of OT&E also varied substantially across the Services. Although the Navy's test agency reported directly to the Chief of Naval Operations, the Air Force and Army test agencies were subordinate to lower levels of command. The Air Force and the Army were repeatedly criticized for allowing their testing agencies to report to organizations which were responsible for the development of new systems. Partially in response to these concerns, the Deputy Secretary of Defense directed the military services in February 1971 to designate OT&E field commands independent of the system developers and the eventual users. These agencies were instructed to report directly to the relevant Chief of Staff. Navy testing responsibility continued to reside with the Operational Test and Evaluation Force (OPTEVFOR), while testing responsibility was assigned to the Air Force Test and Evaluation Command (AFTEC)2 and the Army Operational Test and Evaluation Agency (OTEA).

Prior to 1971 the Department of Defense was not required to convey the results of operational testing to the Congress. In the absence of testing data, Congress generally deferred to DoD expertise on program funding allocations. In addition, Congress was not involved in the design or implementation of operational testing.

2  

AFTEC has now become the Air Force Operational Test and Evaluation Command (AFOTEC).


Over this time period, therefore, the Department of Defense was able to exert considerable influence over the status of individual programs.

As part of its continued effort to become more involved in the procurement process, Congress enacted Public Law 92-156 in 1971. This law requires the Department of Defense to report OT&E results to the Congress annually. Armed with these testing results, Congress began to take a more active role in determining which programs to fund and which to terminate. However, the design and conduct of operational testing continued to be the responsibility of the Department of Defense. Although Public Law 92-156 certainly reduced DoD's explicit influence over funding decisions, DoD continued to exert considerable influence over the acquisition process through its choice of operational tests. The model will show how DoD might have altered its testing strategy in light of Congressional involvement.

Over the period 1971 through 1983, Department of Defense testing procedures received strong criticism from Congress and the General Accounting Office (GAO). Many of these complaints focused on a perceived inadequacy in DoD testing. In 1983, for example, GAO determined that reliability and maintainability testing on the Army's Sergeant York Air Defense Gun had been inadequate to support the production decision (U.S. General Accounting Office, 1983). Similarly, the President's 1970 Blue Ribbon Defense Panel concluded that both developmental and operational testing of the Army M-16 rifle had been inadequate (Blue Ribbon Defense Panel, 1970). In 1979, GAO concluded that developmental testing was also inadequate in the case of the joint Air Force/Navy NAVSTAR Global Positioning Systems (GPS) (U.S. General Accounting Office, 1979a). Although such criticisms are certainly not limited to the time frame described above, the model will show in what sense testing might have been perceived as inadequate.3

As a result of allegations such as these, Congress became increasingly concerned with the planning and conduct of testing in the Department of Defense. The President's Blue Ribbon Panel had also recommended the creation of a higher-than-Service-level organization to help give direction to the operational test agencies. In 1983, Congress instructed DoD to create the Office of the Director of Operational Test and Evaluation to fill this oversight role.

3  

The Army's Aquila Remotely Piloted Vehicle (U.S. General Accounting Office, 1988a) is an example of a program which was criticized for inadequate testing outside the time period described.


DOT&E is headed by a civilian who is appointed by the President and confirmed by the Congress. DOT&E is charged with two primary roles. First, DOT&E is directed to be the principal advisor to the Secretary of Defense regarding OT&E matters. Second, DOT&E is directed to report to Congress on the adequacy of operational testing and the desirability of allowing systems to proceed beyond low rate initial production.

In fulfilling these primary roles, DOT&E has assumed several responsibilities. First, DOT&E is responsible for the prescription of policies and procedures for the conduct of OT&E. Second, DOT&E provides advice to the Secretary of Defense and makes recommendations to the military departments regarding OT&E in general and on specific aspects of operational testing for major systems. In this regard, operational test plans for major acquisitions require DOT&E approval. Third, DOT&E monitors and reviews the conduct of OT&E by the Services. Fourth, DOT&E is responsible for an independent analysis of the results of OT&E for each major system and must report directly to the Secretary of Defense, the Senate and House Armed Services Committees, and the Senate and House Committees on Appropriations. In each case, DOT&E is directed to analyze the adequacy of operational testing as well as the effectiveness and suitability of the tested system. Fifth, DOT&E is responsible for advising the Secretary of Defense regarding all budgetary and financial matters relating to OT&E.

It is well documented that DOT&E had only a limited impact for the first several years of its existence (U.S. General Accounting Office, 1987). The post of Director remained vacant for nearly two years while the Office continued to be underfunded and understaffed. During this time, DOT&E received criticism for failing to adequately monitor Service operational testing. In addition, the General Accounting Office determined that DOT&E reports to the Secretary of Defense and the Congress were not composed independently as required by law. In several instances GAO found DOT&E reports which were copied verbatim from Service documents. In the first several years, DOT&E was therefore unable to fulfill one of its major responsibilities.

DOT&E was, however, largely successful in its early attempts to improve test planning and implementation. To this end, DOT&E developed a uniform set of guidelines for Service operational testing and revised Department of Defense Directive 5000.3, Test and Evaluation.


In 1987, GAO determined that DOT&E had significantly impacted the testing process through its careful review of operational test plans (U.S. General Accounting Office, 1987). On many occasions, the Services were required to make significant revisions in operational test plans for major acquisitions in order to get DOT&E approval. GAO concluded that the adequacy of operational testing was significantly improved by DOT&E's efforts in this regard. Our model will yield considerable insight into DOT&E's decision to reform the test planning process at the expense of ignoring the reporting process.

Since the formation of DOT&E, the Department of Defense has faced renewed criticism. The General Accounting Office and the DoD Inspector General have accused DoD officials of manipulating test results to yield the most favorable interpretation possible. The most highly publicized case involved the Navy's Airborne Self-Protection Jammer (ASPJ) (U.S. General Accounting Office, 1992). The specific allegations stemmed from the reporting of reliability growth test results which were being conducted as part of Initial Operational Test and Evaluation. After testing had begun, Navy testers changed the testing criteria to exclude certain self-diagnostic software failures as not relevant. With these failures excluded, ASPJ was reported to have passed the test criteria. However, the inclusion of these data would have resulted in a test failure. Similar allegations have been levied against other programs, including the various electronic countermeasures programs of the 1980s (U.S. General Accounting Office, 1989, 1991b, 1991c) and the Army's Air Defense Antitank System (ADATS) (U.S. General Accounting Office, 1991a, 1990a). Although criticisms of the reporting process are not limited to the time period described, the model will yield considerable insight into this reporting phenomenon.4

In response to allegations such as these, DOT&E has concentrated additional efforts toward oversight of the test reporting process. DOT&E officials have begun to monitor the progress of operational testing on site. In addition, DOT&E officials currently conduct independent evaluations of operational test results. These evaluations are drawn directly from the raw test data and are not subject to DoD interpretation. DOT&E reports directly to the Congress.

4  

See any of the following GAO publications for additional criticisms of the reporting process (U.S. General Accounting Office, 1979b, 1980, 1988b).


If DoD disagrees with any of the conclusions reached by DOT&E, it may append its own comments to the report to Congress.

3. Related Literature

An important avenue of research on the topic of information transmission was initiated by Milgrom (1981). As an application of more general theorems regarding the monotone likelihood ratio property (MLRP), Milgrom introduces games of persuasion. In a persuasion game an interested party (agent) possesses private information regarding the underlying state of nature and attempts to influence a decision maker (principal) by selectively providing data. For example, the agent might be a salesman who has information regarding the quality of his product and selectively conveys a subset of the data to a consumer. In equilibrium, the consumer accounts for the salesman's selectivity in reaching a consumption decision.

By assumption, the agent is unable (or unwilling because of infinite penalties) to communicate reports which are incorrect. Matthews and Postlewaite (1985) have described this assumption as the imposition of effective antifraud regulations. In light of these antifraud regulations, reports from the agent are limited to supersets of the truth. The salesman may, for example, claim that the product meets or exceeds some criterion if and only if that criterion is satisfied. At the discretion of the agent, however, the report may range from entirely uninformative to absolutely precise.

Milgrom shows that a Nash equilibrium always exists in which the principal resolves to ignore all reports and the agent makes only uninformative reports. However, a proposition demonstrates that every sequential equilibrium (Kreps and Wilson, 1982) of the persuasion game involves precise revelation of the truth by the agent. At the sequential equilibrium, the principal believes that any information withheld by the agent is extremely unfavorable. In the face of such extreme skepticism the agent's best response is truthful revelation.

Matthews and Postlewaite (1985) extend Milgrom's model by adding an earlier stage in which the agent chooses whether or not to become informed. They assume that the cost of acquiring information is zero.


In this context, they distinguish between mandatory disclosure and antifraud regulations. Under mandatory disclosure, an agent must disclose whether or not he has acquired information. Mandatory disclosure does not, however, require the truthful conveyance of information acquired. Truthful reporting of information is still governed by antifraud. Matthews and Postlewaite assume effective antifraud throughout the paper but consider variations of the model with and without mandatory disclosure.

Using the solution concept of sequential equilibrium, Matthews and Postlewaite examine the dependence of information acquisition upon disclosure rules. They show that the agent will acquire and fully disclose information whenever disclosure is not mandatory. When disclosure is mandatory, however, the agent may or may not acquire information. Note that in the presence of antifraud, agents who do not acquire information must report total ignorance to avoid any chance of misrepresenting the truth. In the absence of mandatory disclosure, the sequential equilibrium calls for the principal to adopt extreme skepticism toward any report of ignorance. In the face of such extreme skepticism, agents choose to acquire information and fully reveal.

The extreme skepticism on the part of the principal completely unravels any possible equilibrium claim of ignorance by the agent. Results such as these have been termed unraveling results. Avoiding this unraveling requires some type of credibility for claims of ignorance by the agent. In the context of their model, mandatory disclosure provides this credibility and impedes the unraveling.

Shavell (1994) extends the model of Matthews and Postlewaite in several important directions. Shavell allows the cost of acquiring information to be privately held by the agents. Shavell also considers cases in which the information acquired may be socially valuable. Socially valuable information increases the underlying value of the exchange between the agent and the principal. As in Matthews and Postlewaite, Shavell assumes effective antifraud and analyzes the impact of mandatory disclosure.

Shavell shows that unraveling may be impeded even in the absence of mandatory disclosure. At the sequential equilibrium, two types of agents claim ignorance. The first type have realized cost draws which exceed the expected value of acquiring information. They are truly ignorant. The second type have acquired information which was so unfavorable that they achieve a higher payoff by claiming ignorance.


In equilibrium, the principal simply assigns the appropriate probability to each type when computing his reservation value for exchanges with agents claiming ignorance. Unraveling is also impeded when the information acquired is socially valuable.

In short, the privacy of the cost draw gives credibility to the claims of ignorance by the agents. This credibility is enough to preclude the unraveling effect. Such a result is in stark contrast with Matthews and Postlewaite. This contrast highlights the critical importance of the assumption regarding the distribution of costs. When the cost distribution is not degenerate, the unraveling effect is impeded and the principal must give credibility to claims of ignorance.5 However, as the cost distribution becomes degenerate the principal's skepticism completely unravels any claim of ignorance by the agent. In Matthews and Postlewaite, therefore, it is not the assumption that the costs of acquiring information are zero which drives the unraveling result. Rather it is the degeneracy of the cost distribution.

Jovanovic (1982) reaches a similar conclusion by imposing privately known costs of conveying information upon the agent. It seems clear that some private information on the part of the agent is required to avoid the unraveling effect.

Kofman and Lawarrée (1993) present a variant in which the agent takes an action which partially determines the state of nature. Although the state of nature is revealed to the principal, the action taken by the agent is not observed. In this context, the principal may employ an internal auditor to gather more accurate information regarding the agent's action. The model allows for the possibility that the internal auditor may be involved in a collusive agreement with the agent. In equilibrium, however, collusion is stymied by bounty hunter contracts in which the principal gives any penalty extracted from the agent directly to the auditor.

Kofman and Lawarrée also consider the case in which an external auditor may be employed. The external auditor does not have the possibility of colluding with the agent, but lacks the expertise to gather data as accurately as the internal auditor. A proposition determines the conditions under which the principal will use the internal auditor, the external auditor, or both.

5  

In this context, degeneracy requires only a support for the cost distribution which does not include the value of acquiring information.


Although they do not elaborate, Kofman and Lawarrée indicate that the model is consistent with the relationship between Congress and the Department of Defense. Perhaps DOT&E would play the role of the external auditor and the Service test agencies would play the role of the internal auditor.

Crawford and Sobel (1982) take an entirely different approach to games of information transmission. In their model, the preferences of the two parties are somewhat aligned. Crawford and Sobel completely relax the antifraud assumption to allow for a type of cheap talk communication. Although equilibrium messages will not necessarily involve full disclosure, they show that antifraud is not violated at equilibrium.

Crawford and Sobel show that all the Bayesian Nash equilibria are partition equilibria. In a partition equilibrium, the agent introduces noise into his report by partitioning the state space and reporting only the partition in which the realization lies. The size of the individual partitions varies directly with the proximity of the parties' preferences. For identical preferences, the partitions will be infinitely small and the report will be precise. As preferences differ, the partitions grow in size and the agent attempts to pool over larger and larger realizations. If preferences are suitably different, the agent partitions the state space into a single partition, which amounts to a claim of ignorance.
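To fix ideas, the standard uniform-quadratic specification of the Crawford-Sobel game (a textbook illustration, not part of the present paper's model) makes the partition structure concrete. Let the state θ be uniform on [0,1], let the principal's loss be $(y-\theta)^2$ and the agent's be $(y-\theta-b)^2$ for a bias $b > 0$. A two-element partition with boundary $a$ requires the boundary type to be indifferent between the two induced actions $y_1 = a/2$ and $y_2 = (1+a)/2$:

$$\frac{a}{2} + b = \frac{1-a}{2} - b \;\Longrightarrow\; a = \frac{1}{2} - 2b,$$

which is feasible only when $b < 1/4$. As the bias grows past this bound, even two-element communication collapses into the single uninformative partition.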

Crawford and Sobel show that if the preferences of the parties do not coincide, the equilibrium number of partitions is always finite. Thus information is never perfectly revealed. Such a result is in sharp contrast with the results of Milgrom and of Matthews and Postlewaite.

Green and Stokey (1980) consider a similar game from an alternate perspective. The preferences of the parties are held constant while the information structure itself is varied. Green and Stokey demonstrate that a more informative information structure does not necessarily imply higher welfare for the parties.6 Examples are constructed in which the welfare of each party is either reduced or enhanced by improvements in the information structure. In addition, Green and Stokey identify several types of equilibria, including partition equilibria. For the purpose of comparative statics, focus is given to the partition equilibria.

6  

One information structure is said to be more informative than another if it provides higher expected utility for a decision maker regardless of the utility function. See Hirshleifer and Riley (1992) for a complete discussion.


It is shown that the agent will always prefer small improvements in the information structure at a partition equilibrium, while the principal may not.

4. The Modeling Framework

The model contains three economic agents. Congress plays the role of the principal while the Department of Defense plays the role of the agent. We assume that DOT&E is a perfect agent of Congress. Thus, there are effectively two players: Congress and DoD.

There are three possible states regarding an individual program: A, B, and C. Nature determines the state of the program according to the probabilities PA, PB, and PC, respectively. We assume that these probabilities are the pretesting beliefs of all participants and are common knowledge. In addition, we assume that the states of the world are mutually exclusive and exhaustive. That is, $P_A + P_B + P_C = 1$ and $P_i > 0$ for all i = A, B, C.

The testing of a system reveals an information partition which is a superset of the true state. Information partitions may range from very fine, as when a single state is uniquely identified, to very coarse. Let $R_\omega$ denote the payoff to DoD when testing reveals information partition $\omega$ and the system is procured. For example, RA is the payoff to DoD when a state A system is procured and RAB is the payoff when an information partition (A,B) system is procured. Similarly, let $S_\omega$ denote the payoff to Congress when testing reveals partition $\omega$ and the system is procured. If a system is not procured, both parties are assumed to receive zero payoff.

Assumption 1 We make the following assumptions regarding the payoffs $R_\omega$ and $S_\omega$:

a. $R_A < 0$, while $R_\omega > 0$ for every other information partition $\omega$.

b. $S_{BC} > 0$ and $S_C > 0$, while $S_\omega < 0$ for every other information partition $\omega$.

c. $\left(\sum_{i \in \omega} P_i R_i\right) / \left(\sum_{i \in \omega} P_i\right) \ge R_\omega$ for all information partitions.

d. $\left(\sum_{i \in \omega} P_i S_i\right) / \left(\sum_{i \in \omega} P_i\right) \ge S_\omega$ for all information partitions.

e. All payoffs are common knowledge to the participants.

DoD will choose to proceed with any program that is not state A, while Congress will choose to proceed with information partitions (B,C) and C only. Clearly, the disagreement is centered on state B. Effectively, we assume that DoD would proceed with any program Congress would approve, but not the converse. These assumptions appear to be consistent with the majority of the historical disagreements between Congress and DoD. From time to time, however, Congress has approved funding for programs which DoD wished to terminate. An example of such a program can be found in the Navy's V-22 Osprey (U.S. General Accounting Office, 1990b). In its current form, our model cannot explain programs such as this.
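A purely illustrative parameterization (the numbers are ours, not the paper's) makes the disagreement concrete. With priors $P_A = 0.3$, $P_B = 0.3$, $P_C = 0.4$ and state payoffs

$$R_A = -0.5,\quad R_B = 1,\quad R_C = 3; \qquad S_A = -3,\quad S_B = -1,\quad S_C = 2,$$

DoD would procure a system known to be in state B ($R_B > 0$) while Congress would terminate it ($S_B < 0$); both parties agree on procuring in state C and terminating in state A, so the disagreement is confined to state B, as assumption 1 requires.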

We give the following definition of socially valuable information:

Definition 1 Information is said to be socially valuable if the conditional expected value within a given information partition exceeds the actual payoff for that partition.

If, for example, $(P_A R_A + P_B R_B)/(P_A + P_B) > R_{AB}$, then information is said to be socially valuable. If information is not socially valuable—i.e., information does not change the way DoD behaves—then the previous statement is characterized by equality. The idea is that RAB, in this case, is really a reduced form. Thus, the conditional expected value within a given information partition must always be at least as large as the actual payoff.

Intuitively, information is socially valuable when knowing more precise information allows DoD to adopt a better procurement strategy. For example, finer information might allow the technicians to make small changes in the design which enhance the value of the overall program.
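Continuing the illustrative numbers above, the conditional expected value to DoD within partition (A,B) is

$$\frac{P_A R_A + P_B R_B}{P_A + P_B} = \frac{(0.3)(-0.5) + (0.3)(1)}{0.6} = 0.25,$$

so a reduced-form payoff $R_{AB} = 0.2 < 0.25$ would make information socially valuable for that partition, while $R_{AB} = 0.25$ is the equality (no social value) case.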

The total testing budget may be allocated over two types of tests. Let tA denote resources devoted toward distinguishing state A from its complement. We term this type of testing type A testing. Similarly, let tC denote resources devoted toward distinguishing state C from its complement. We term this type of testing type C testing. We assume that the total resources available for testing are exogenously given as T. Thus, $t_A + t_C \le T$ is a constraint on the test design process.


Let ZA(tA) denote the actual probability of distinguishing state A from its complement as a function of the type A testing resources. Similarly let ZC(tC) denote the probability of distinguishing state C from its complement.

Assumption 2 We make the following assumptions regarding the testing technology and test information:

a. $Z_i'(t_i) > 0$ for $i = A, C$.

b. $Z_i''(t_i) < 0$ for $i = A, C$.

c. $Z_A$ is independent of $t_C$, and $Z_C$ is independent of $t_A$.

d. Once the relevant player selects $t_A$ and $t_C$, they are common knowledge, and $Z_i(t_i)$ is common knowledge for $i = A, C$.

Intuitively, we assume that additional resources increase the probability of distinguishing between a state and its complement, at a decreasing rate. Furthermore, there are no learning spillovers between type A testing and type C testing. These assumptions allow for the possibility of interior solutions in test resource allocation.
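One concrete family satisfying these requirements (our illustration; the paper does not commit to a functional form) is the exponential technology

$$Z_i(t_i) = 1 - e^{-\alpha_i t_i}, \qquad Z_i'(t_i) = \alpha_i e^{-\alpha_i t_i} > 0, \qquad Z_i''(t_i) = -\alpha_i^2 e^{-\alpha_i t_i} < 0,$$

for hypothetical sensitivity parameters $\alpha_i > 0$, $i = A, C$.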

5. Decision Problems and Games

This section poses several decision problems and games which, we argue, are consistent with various time periods in the history of operational testing.

Suggested Citation:"Strategic Information Generation and Transmission: The Evolution of Institutions in DoD Operational Testing." National Research Council. 1999. Statistics, Testing, and Defense Acquisition: Background Papers. Washington, DC: The National Academies Press. doi: 10.17226/9655.
×

Decision Problem 1

We begin by considering the decision problem faced by DoD in the absence of any Congressional oversight. We analyze a multi-stage decision problem. DoD must first determine an allocation for the total testing budget T. Then, after an information partition is revealed, DoD must decide whether to continue or terminate the program. Formally, we represent the decision problem as follows:

Stage 1 DoD determines the allocation of test resources tA and tC.

Stage 2 Nature reveals an information partition to DoD.

Stage 3 DoD continues or terminates the program.

This decision problem is consistent with the procurement process prior to Public Law 92-156. Recall that this law required DoD to begin reporting operational test results to Congress. As described in section 2, DoD exercised considerable influence over the entire procurement cycle during this time period.

According to assumption 1, DoD will proceed with any program that is not state A. In light of these stage 3 preferences, the objective function for stage 1 can be expressed by equation 5.1 below:

$$\pi_1 = (1-Z_A)(1-Z_C)R_{ABC} + Z_A(1-Z_C)(1-P_A)R_{BC} + (1-Z_A)Z_C\left[(1-P_C)R_{AB} + P_C R_C\right] + Z_A Z_C\left[P_B R_B + P_C R_C\right] \qquad (5.1)$$

Equation 5.1 can be easily interpreted. With probability $(1-Z_A)(1-Z_C)$, the completely uninformative partition is revealed. In that case, DoD would continue the program and receive benefit RABC. With probability $Z_A(1-Z_C)$, DoD can only distinguish state A from its complement. With probability (1 - PA) then, partition (B,C) is revealed and a payoff of RBC is earned.


However, with probability PA, state A is revealed and a zero payoff is earned. The other entries have similar interpretations.

It is important to note that equation 5.1 would represent the social planner's problem if DoD's preferences accurately reflected society's true preferences over program quality.
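The following minimal numerical sketch shows how decision problem 1 can be solved by direct search. It assumes the reconstructed objective (5.1), the exponential technology above, and invented parameter values; none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical parameters (illustrative only; not from the paper).
P = {"A": 0.3, "B": 0.3, "C": 0.4}          # prior state probabilities
R = {"ABC": 1.0, "BC": 1.5, "AB": 0.2,      # DoD partition payoffs
     "B": 1.0, "C": 3.0}
T = 10.0                                     # total testing budget
aA, aC = 0.3, 0.3                            # technology sensitivities

def Z(t, a):
    """Probability of distinguishing a state from its complement."""
    return 1.0 - np.exp(-a * t)

def pi1(tC):
    """Reconstructed stage 1 objective (5.1) for decision problem 1."""
    tA = T - tC                              # binding budget constraint
    ZA, ZC = Z(tA, aA), Z(tC, aC)
    return ((1 - ZA) * (1 - ZC) * R["ABC"]
            + ZA * (1 - ZC) * (1 - P["A"]) * R["BC"]
            + (1 - ZA) * ZC * ((1 - P["C"]) * R["AB"] + P["C"] * R["C"])
            + ZA * ZC * (P["B"] * R["B"] + P["C"] * R["C"]))

grid = np.linspace(0.0, T, 1001)
tC_star = grid[np.argmax([pi1(t) for t in grid])]
print(f"DoD's type C allocation in decision problem 1: tC* = {tC_star:.2f}")
```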

Decision Problem 2

This section considers the Congressional decision problem in the absence of any DoD influence. Again we identify a multi-stage decision problem. In the first stage, DOT&E (the perfect agent of Congress) selects an allocation of test resources.7 After observing an information partition, DOT&E reports to Congress, who decides whether to continue or terminate the project.

Stage 1 DOT&E determines the allocation of test resources tA and tC.

Stage 2 Nature reveals an information partition to DOT&E.

Stage 3 DOT&E reports the information partition to Congress.

Stage 4 Congress continues or terminates the project.

According to assumption 1, Congress will continue only those projects in information partitions (B,C) and (C). In light of these preferences, the stage 1 objective function for DOT&E is given by equation 5.2 below:8

$$\Pi_2 = Z_A(1-Z_C)(1-P_A)S_{BC} + Z_C P_C S_C \qquad (5.2)$$

7  

"Congress" and "DOT&E" are identical players in this and all subsequent games. We use the different names to mimic the role of each in the actual process.

8  

We denote Congressional objective functions with uppercase symbols and DoD objective functions with lowercase.


It is important to note that equation 5.2 would represent the social planner's problem if Congressional preferences accurately reflected society's true preferences over program quality.

Game 3

In this section we consider a game in which DoD determines the test resource allocation, while Congress makes the final funding decision. In this game, DoD observes the information partition and makes a report to Congress, who has not seen the information partition.

Stage 1 DoD determines the allocation of test resources tA and tC.

Stage 2 Nature reveals the information partition to DoD.

Stage 3 DoD makes a report to Congress regarding the information partition.

Stage 4 Congress continues or terminates the project.

This game is consistent with the time period between Public Law 92-156 and the formation of DOT&E. As described in section 2, DoD was required to report operational test results to Congress over this time period. However, Congress was not involved in the planning and conduct of the actual tests, nor did it exercise any effective oversight of the test reporting stage.

In all of the games considered, we assume effective antifraud regulations but not mandatory disclosure. Although DoD is not forced to reveal information, any information it chooses to reveal must be correct. The absence of mandatory disclosure allows DoD to pool over information partitions. For example, they may choose to report information partition (B,C) when they observe B. DoD can always report less fine information than they observe (lack of mandatory disclosure) but cannot report finer information (antifraud).
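Concretely, antifraud restricts DoD to reports that are supersets of the observed partition. Enumerating the possibilities (our tabulation of the rule just stated): observing B, DoD may report B, (A,B), (B,C), or (A,B,C); observing (B,C), it may report (B,C) or (A,B,C); observing (A,B,C), it can only report (A,B,C). In particular, a completely uninformative test outcome cannot masquerade as the favorable report (B,C).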

In the final stage of the game, Congress will approve only those projects reported to be in partitions (B,C) and C. A perfect Bayesian equilibrium exists in which DoD pools over states B and C by reporting (B,C) for both.


All other partitions are reported truthfully. Whenever DoD reports to Congress something other than (B,C), Congress believes the report to be exactly what DoD observed. When DoD reports (B,C), Congress updates its prior probabilities by Bayes' rule. There are other perfect Bayesian equilibria, but this one seems to best capture the salient behavior of DoD.9
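To illustrate the updating (the derivation is ours, using the reporting rule just described): a report of (B,C) arises from state B whenever the type A test succeeds, and from state C whenever either test succeeds, so Congress's posterior that a reported (B,C) system is actually state B is

$$\Pr\left(B \mid \text{report } (B,C)\right) = \frac{P_B Z_A}{P_B Z_A + P_C\left(Z_A + Z_C - Z_A Z_C\right)}.$$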

At the reporting stage of this equilibrium, the stage 1 objective function for DoD is given by equation 5.3:

$$\pi_3 = Z_A(1-Z_C)(1-P_A)R_{BC} + (1-Z_A)Z_C P_C R_C + Z_A Z_C\left[P_B R_B + P_C R_C\right] \qquad (5.3)$$

The last term is a direct result of DoD's ability to pool over (B,C) information partitions.

Game 4

This section considers the case in which DOT&E determines the allocation of test resources while DoD observes the actual test results. DoD then reports the test results to Congress, who may continue or terminate the program. We continue to assume effective antifraud but not mandatory disclosure.

Stage 1 DOT&E determines the allocation of testing resources tA and tC.

Stage 2 Nature reveals the information partition to DoD.

Stage 3 DoD makes a report to Congress regarding the information partition.

Stage 4 Congress continues or terminates the program.

We believe that this game is consistent with operational testing during the first several years after the formation of DOT&E. As described in section 2, DOT&E concentrated its early efforts on improving test design and implementation.

9  

We make no attempt to establish uniqueness of any equilibrium in any of the four games. We are focusing attention on equilibria that, again, best capture the salient behavior of DoD and Congress.


Standards for testing were established and DoD testing personnel were forced to comply. During this time period, however, DOT&E did not effectively oversee the reporting of test results to Congress.

Just as in game 3, a perfect Bayesian equilibrium exists in which DoD pools over states B and C by reporting (B,C) for each. All other partitions are reported truthfully. Congressional beliefs are as in game 3.

When (B,C) is reported, DoD will have observed one of three possible partitions—B, C, or (B,C). If DoD observes (B,C) then the Congressional payoff is SBC. If DoD observes B then, because we allow for information to be socially valuable, the Congressional payoff will be SB even though the report is (B,C). The same is true for C. Note that if information is not socially valuable this distinction is irrelevant. Under this assumption, the stage 1 objective function for DOT&E is given by equation 5.4:

$$\Pi_4 = Z_A(1-Z_C)(1-P_A)S_{BC} + (1-Z_A)Z_C P_C S_C + Z_A Z_C\left[P_B S_B + P_C S_C\right] \qquad (5.4)$$

Game 5

This section considers the case in which DoD determines the allocation of test resources, but the actual test results are observed by DOT&E. DOT&E then reports truthfully to Congress, who may continue or terminate the program.

Stage 1 DoD determines the allocation of testing resources tA and tC.

Stage 2 Nature reveals the information partition to DOT&E.

Stage 3 DOT&E reports the information partition to Congress.

Stage 4 Congress continues or terminates the program.


As described in section 2, Congress originally charged DOT&E with two major oversight responsibilities: test planning and test reporting. DOT&E concentrated its early efforts on improving test planning at the expense of reporting oversight. The decision to do so can be analyzed in the context of this game.

Prior to the formation of DOT&E, the status of operational testing was consistent with game 3. By concentrating its efforts in the area of test planning, DOT&E effectively shifted the operational test environment to game 4. DOT&E could have chosen to concentrate its efforts on reporting oversight. This would have shifted the testing environment to game 5. By considering game 5, we therefore gain insight into this decision.

Since DOT&E is the perfect agent of Congress, it will report truthfully the information partition revealed in stage 2. Since Congress will approve any program revealed as partition (B,C) or C, the stage 1 objective function for DoD is given by equation 5.5:

$$\pi_5 = Z_A(1-Z_C)(1-P_A)R_{BC} + Z_C P_C R_C \qquad (5.5)$$

6. Welfare Results

This section examines the welfare of the players at the equilibria of the decision problems and games proposed in the previous section. The first stage of each game described above involves the solution of a constrained maximization problem in tA and tC. In all of the following analysis, we assume that the budget constraint is binding. In addition, we assume that the sufficient conditions for maxima are always satisfied. In appendix B we show that this assumption requires restrictions on only three of the five problems considered.
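Throughout this section the binding constraint reduces each problem to one dimension. Explicitly (a worked statement of the step, under our reconstruction):

$$t_A = T - t_C \;\Longrightarrow\; \max_{t_C \in [0,T]} \pi_i\left(Z_A(T - t_C),\, Z_C(t_C)\right), \qquad \frac{dZ_A}{dt_C} = -Z_A'(T - t_C) < 0,$$

so every marginal unit of type C testing comes at the cost of type A testing.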

Substituting the constraint into the objective functions for the various games yields a series of unconstrained problems in tC. Evaluating the welfare of the players at the relevant solution for tC yields the following rankings:

Proposition 1 Under assumptions 1 and 2, Congressional welfare evaluated at the relevant solution for tC is characterized by

$$\Pi_2^* \ge \Pi_4^* \ge \Pi_3^*, \qquad \Pi_4^* \ge \Pi_1^*, \qquad \Pi_2^* \ge \Pi_5^*,$$

where $\Pi_i^*$ denotes Congressional welfare evaluated at the solution $t_C^{i*}$ of decision problem or game i.


Proof of Proposition 1 To prove the first part of the proposition, notice that for any value of tC, $\Pi_2$ exceeds $\Pi_4$ by the magnitude $Z_A Z_C P_B |S_B|$ (since SB is negative by assumption). Now evaluate both $\Pi_2$ and $\Pi_4$ at $t_C^{4*}$ to yield $\Pi_2(t_C^{4*}) > \Pi_4(t_C^{4*})$. Noting that $\Pi_2(t_C^{2*}) \ge \Pi_2(t_C^{4*})$ completes the proof.

To formulate the Congressional objective function for game 3, we simply replace $R_\omega$ with $S_\omega$ in π3 to obtain $\Pi_3$. Notice that doing so yields exactly $\Pi_4$. Thus $\Pi_3 = \Pi_4$ for all tC. Now since Congress selects the maximal value of $\Pi_4$ in game 4, the welfare from game 3, in which DoD selects tC, cannot be higher.

By replacing $R_\omega$ with $S_\omega$ in π1 as before we obtain $\Pi_1$. Notice that for any tC, $\Pi_4$ exceeds $\Pi_1$ by the magnitude $(1-Z_A)(1-Z_C)|S_{ABC}| + (1-Z_A)Z_C(1-P_C)|S_{AB}|$. As in the first part of this proof, the maximal value of $\Pi_4$ chosen by Congress in game 4 must exceed the maximal value of $\Pi_1$, which in turn cannot be less than the value derived from DoD's choice in game 1.

To prove the second part of the proposition, replace $R_\omega$ with $S_\omega$ in π5 to obtain $\Pi_5$. Notice that $\Pi_5 = \Pi_2$ for all tC. As above, the value of $\Pi_2$ selected by Congress in game 2 cannot be less than the value which results from DoD's choice in game 5.

The proposition demonstrates that oversight of the test design stage (game 4) cannot decrease the welfare of the principal as compared with no oversight (game 3). However, oversight of the reporting stage (game 5) may increase or decrease the principal's welfare as compared with no oversight. Below, we examine the possibility that increased oversight may make the principal worse off.

First note that π5 can be expressed as a function of π3 by the following:

$$\pi_5 = \pi_3 - Z_A Z_C P_B R_B \qquad (6.1)$$


Substituting the binding constraint $t_A = T - t_C$ (so that $dZ_A/dt_C = -Z_A'$) and using $d\pi_3/dt_C = 0$ at $t_C^{3*}$, differentiating and evaluating the expression at the solution to game 3 yields:

$$\left.\frac{d\pi_5}{dt_C}\right|_{t_C = t_C^{3*}} = -P_B R_B\left[Z_A Z_C' - Z_A' Z_C\right] \qquad (6.2)$$

Now if we assume that the solution to game 3 involves a relatively high value of tA and a relatively low value of tC, then the bracketed term above will be positive. This is a reasonable assumption given DoD's preference for (B,C) systems. When this assumption is satisfied, $t_C^{3*}$ will exceed $t_C^{5*}$. In this case, the additional oversight by the principal has the unintended effect of reducing the type C testing. Below we show that this reduction in type C testing may lead to a reduction in the principal's welfare.

Assuming as above that $t_C^{3*} > t_C^{5*}$, we compare the principal's welfare at the solutions of games 3 and 5 with the following equation, where $Z_i^j$ denotes $Z_i$ evaluated at the solution to game j:

$$\Pi_3^* - \Pi_5^* = (1-P_A)S_{BC}\left[Z_A^3(1-Z_C^3) - Z_A^5(1-Z_C^5)\right] + P_C S_C\left[Z_C^3 - Z_C^5\right] + Z_A^3 Z_C^3 P_B S_B \qquad (6.3)$$

Specifically we are interested in the case in which $\Pi_3^* > \Pi_5^*$. The first and third terms of 6.3 are negative by assumption, but the second term is positive. Thus equation 6.3 will exceed 0 if SC is suitably large. The explicit condition is given by the following inequality:

$$S_C > \frac{(1-P_A)S_{BC}\left[Z_A^5(1-Z_C^5) - Z_A^3(1-Z_C^3)\right] - Z_A^3 Z_C^3 P_B S_B}{P_C\left[Z_C^3 - Z_C^5\right]} \qquad (6.4)$$

Thus if inequality 6.4 is satisfied, additional oversight of the reporting stage will actually reduce the principal's welfare as compared to no oversight. Inequality 6.4 is most likely to be satisfied when information is socially valuable.


In that case, SC is large compared to SBC and the condition is easier to satisfy.
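As a numerical illustration of this comparison (a sketch under our reconstructed objectives (5.3) and (5.5), the exponential technology, and invented payoffs; whether condition 6.4 binds depends entirely on the chosen parameters):

```python
import numpy as np

# Hypothetical parameters (illustrative only; not from the paper).
P = {"A": 0.3, "B": 0.3, "C": 0.4}
R = {"BC": 1.5, "B": 1.0, "C": 3.0}          # DoD payoffs
S = {"BC": 0.5, "B": -1.0, "C": 2.0}         # Congressional payoffs
T, aA, aC = 10.0, 0.3, 0.3

def Z(t, a):
    return 1.0 - np.exp(-a * t)

def tech(tC):
    """Both detection probabilities under the binding budget tA = T - tC."""
    return Z(T - tC, aA), Z(tC, aC)

def pi3(tC):                                 # DoD objective, game 3 (5.3)
    ZA, ZC = tech(tC)
    return (ZA * (1 - ZC) * (1 - P["A"]) * R["BC"]
            + (1 - ZA) * ZC * P["C"] * R["C"]
            + ZA * ZC * (P["B"] * R["B"] + P["C"] * R["C"]))

def pi5(tC):                                 # DoD objective, game 5 (5.5)
    ZA, ZC = tech(tC)
    return ZA * (1 - ZC) * (1 - P["A"]) * R["BC"] + ZC * P["C"] * R["C"]

def welfare(tC, pooled):
    """Congressional welfare; pooled=True adds game 3's S_B leakage term."""
    ZA, ZC = tech(tC)
    w = ZA * (1 - ZC) * (1 - P["A"]) * S["BC"] + ZC * P["C"] * S["C"]
    if pooled:
        w += ZA * ZC * P["B"] * S["B"]
    return w

grid = np.linspace(0.0, T, 1001)
t3 = grid[np.argmax([pi3(t) for t in grid])]
t5 = grid[np.argmax([pi5(t) for t in grid])]
print(f"tC chosen by DoD: game 3 = {t3:.2f}, game 5 = {t5:.2f}")
print(f"Congressional welfare: game 3 = {welfare(t3, True):.3f}, "
      f"game 5 = {welfare(t5, False):.3f}")
```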

If we suppose that Congressional preferences are aligned with society's preferences, then proposition 1 sheds a favorable light on the evolution of operational testing. By concentrating on the oversight of test design, DOT&E has increased social welfare. In addition, oversight of the reporting stage in conjunction with oversight of the test design stage has moved the process toward decision problem 2. This additional oversight has also improved social welfare if Congress reflects the true preferences of the society.

We obtain a similar proposition regarding DoD welfare:

Proposition 2 Under assumptions 1 and 2, DoD welfare evaluated at the relevant solution for tC is characterized by

$$\pi_1^* \ge \pi_3^* \ge \pi_4^*, \qquad \pi_5^* \ge \pi_2^*,$$

where $\pi_i^*$ denotes DoD welfare evaluated at the solution $t_C^{i*}$ of decision problem or game i.

Proof of Proposition 2 To prove the first part of the proposition, first notice that for any value of tC, π1 exceeds π3 by the magnitude $(1-Z_A)(1-Z_C)R_{ABC} + (1-Z_A)Z_C(1-P_C)R_{AB}$. As in the proof of proposition 1, the maximal value of π1 must therefore exceed the maximal value of π3. The proof of $\pi_3^* \ge \pi_4^*$ follows from the same logic.

To formulate π2, replace $S_\omega$ with $R_\omega$ in $\Pi_2$. Notice that π5 = π2 for all tC. As in proposition 1, no other value of tC can yield a payoff for π5 in excess of the value chosen by DoD in game 5. The proof of the second part of the proposition follows precisely the same logic.

If we suppose that the society's true preferences are reflected by DoD, proposition 2 sheds an unfavorable light on the evolution of operational testing. Social welfare was highest in the absence of Congressional involvement (decision problem 1). As Congressional oversight has strengthened, social welfare has progressively declined.

Propositions 1 and 2 bound social welfare during various stages in the evolution of operational testing. In all likelihood, society's true preferences lie somewhere between those of Congress and DoD.


Therefore, the extent to which additional oversight has increased or decreased social welfare remains an open question.

7. Qualitative Testing Results

In this section, we compare the equilibrium levels of testing which result from the decision problems and games posed in section 5. We continue to assume that the budget constraint is binding and the sufficient conditions for maxima are satisfied. In this section we make an additional assumption regarding the social value of information.

Assumption 3 Information has no social value.

Thus in this section of the paper we assume that the payoff for any multi-state information partition is exactly equal to the conditional expected value within that information partition. So, for example, $R_{BC} = (P_B R_B + P_C R_C)/(P_B + P_C)$ by assumption.

Proposition 3 Under assumptions 1, 2, and 3 the equilibrium testing induced by the decision problems and games posed in section 5 can be ranked according to

$$t_C^{2*} \ge t_C^{4*} \ge t_C^{3*} \ge t_C^{5*} \ge t_C^{1*}.$$

The tedious but straightforward proof of proposition 3 is contained in appendix A. Intuitively, the proposition orders the type C testing generated by the various models and decision problems. Below we argue that this ranking captures the major features of the evolution of operational testing in the Department of Defense.

Prior to the enactment of Public Law 92-156 in 1971, DoD exercised considerable control over the entire procurement process. This influence extended not only to test design but also into the final procurement decisions. We analyze this time period with decision problem 1.

Proposition 3 reveals the general nature of the conflict between Congress and DoD over operational testing. Namely, DoD devotes fewer resources toward type C testing than Congress would like.


As a first step toward resolving this conflict, Congress required DoD to report operational test results. In the context of the model, the testing process was shifted to game 3.

Game 3 represents the status of operational testing in the time between the enactment of Public Law 92-156 and the establishment of DOT&E. Over this time period, DoD received substantial criticism for what was termed inadequate testing. In the context of our model, this inadequacy might be interpreted as a lack of resources devoted to tC. From the standpoint of DoD, the testing was completely adequate to support procurement decisions. However, Congress considered the test resource allocation to be inadequate. Proposition 3 verifies this intuition.

Section 2 describes how DOT&E concentrated its early efforts in the area of test design and implementation. DOT&E could just as easily have concentrated its efforts on improving the test reporting process. However, DOT&E's limited budget probably would not have allowed it to impact both test design and test reporting. In the context of our model, this decision is simply a choice between game 4 (impact test design) and game 5 (impact test reporting). Proposition 3 reveals game 4 as the preferred option in terms of test resource allocation. As we have seen in section 6, game 5 might actually reduce the tC testing from the game 3 level. In light of proposition 3, DOT&E's decision to impact test design appears to be a rational response to the underlying incentives.

More recently, DOT&E has taken an active oversight role in the reporting of test results. As described in section 2, DOT&E personnel are responsible for independent assessments of test data. In addition, high ranking staff members are regularly called before Congress to address the desirability of procuring new systems and the adequacy of operational testing. It is important to note that these responsibilities are in addition to DOT&E's continued oversight of test design and implementation. Therefore, DOT&E now plays a significant role in all phases of the testing process. In the context of our model, the operational testing environment is moving toward decision problem 2. We have already shown that decision problem 2 obtains the highest welfare for Congress but the lowest welfare for DoD.


8. Conclusion

We have presented a model which extends the information transmission literature to consider the question of strategic information generation. In the context of a principal-agent game, strategic information generation gives the agent an added dimension in which to manipulate the process. In response, the principal might choose to extend some oversight authority. We have shown that oversight of the test design stage cannot decrease the principal's welfare, while oversight of the test reporting stage may. The model is remarkably consistent with the evolution of testing institutions in the Department of Defense.

There are many directions in which the present model might be extended. In the context of the Department of Defense example, the next logical step is to make the total testing budget T endogenous. Despite the Congressional oversight efforts documented above, the Department of Defense continues to maintain considerable control over the total testing budget, and a more complete description of the testing environment would incorporate this feature.


Appendix A: Proposition Proofs

This appendix contains the proof of proposition 3. We begin by considering a simple lemma.

Lemma 1 The following inequality is satisfied at the equilibria of games 3 and 5.

Proof of Lemma 1 Consider the first-order condition from game 3:

which can be manipulated into the following form:

Thus, the lemma holds if the right-hand side of A.3 is positive. When information has no social value, equation A.3 reduces to the following:


Since the right-hand side of A.4 is positive, the lemma holds for game 3. Now consider the first-order condition for game 5:

When information has no social value, equation A.5 can be simplified to the following:

As the left-hand side of equation A.6 is positive, the lemma holds for game 5.

Proof of Proposition 3 To show that equilibrium type C testing under decision problem 2 exceeds that under game 3, we first express the objective function from decision problem 2 in terms of the game 3 objective function:

Evaluating the first-order condition from decision problem 2 at the solution to π3 yields the following:


Substituting from equation A.3, we have the following:

which reduces to the following:

Making the first term as large as possible yields the following:

Simplifying,


where the last inequality holds because the first term is necessarily positive and the second is positive by lemma 1.

To prove that equilibrium type C testing under game 4 exceeds that under game 3, we write the objective function from game 4 in terms of the game 3 objective function:

Taking the derivative of the game 4 objective function and evaluating it at the solution to π3 yields the following:

Simplifying and proceeding as above, we have the following:


where the last inequality results from the fact that the first term is necessarily positive and the second is positive by lemma 1.

To show that equilibrium type C testing under decision problem 2 exceeds that under game 5, we write Π2 as a function of π5:

Evaluating the derivative of A.16 at the solution to game 5 and proceeding as above, we have the following:

where the last inequality follows from lemma 1.

To show that equilibrium type C testing under game 4 exceeds that under game 5, we write Π4 as a function of π5:

Evaluating the derivative of equation A.18 at the solution to game 5 and proceeding as above, we have the following:


where the last inequality follows from lemma 1.

To show that equilibrium type C testing under game 5 exceeds that under decision problem 1, we begin by writing the objective function for decision problem 1 in terms of the game 5 objective function:

The first-order condition for decision problem 1, evaluated at the solution to game 5, is given by the following:


where the last equality follows from the fact that information is not socially valuable. When information has no social value, the first-order condition for game 5 simplifies to the following equation:

Combining A.22 with A.21, we obtain the following:

where the final inequality results from the negativity of RA.

To show that equilibrium type C testing under game 3 exceeds that under decision problem 1, we write π1 as a function of π3:

Evaluating the derivative of equation A.24 at the solution to game 3 and simplifying, we have the following:


This concludes the proof of proposition 3.


Appendix B: Second Order Conditions

This appendix details the implications of the concavity restrictions we impose on the objective functions in section 5. We begin by considering decision problem 1. It can easily be shown that the sufficient condition for an interior maximum is 2L_{12} - L_{11} - L_{22} > 0, where L_{ij}, for i, j = 1, 2, denotes the second partial derivative of the constrained optimization problem with respect to arguments i and j. In the context of decision problem 1, this condition is given by the following statement:

We assume that condition B.1 is always satisfied.
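The general form of these sufficient conditions can be derived in one step. The following is a sketch assuming only that each problem has the generic budget-constrained structure max f(t_A, t_C) subject to t_A + t_C = T, with the specific objective f differing across the decision problems and games. Substituting the constraint gives

\[
g(t_C) \equiv f(T - t_C,\, t_C), \qquad g'(t_C) = -f_1 + f_2,
\]

\[
g''(t_C) = f_{11} - 2f_{12} + f_{22} < 0
\quad\Longleftrightarrow\quad
2f_{12} - f_{11} - f_{22} > 0,
\]

which is the condition above with L_{ij} = f_{ij}: the sufficient condition is exactly strict concavity of the constrained objective along the budget line.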

The sufficient condition for game 4 can be expressed by the following statement:

We assume that condition B.2 is satisfied.

Suggested Citation:"Strategic Information Generation and Transmission: The Evolution of Institutions in DoD Operational Testing." National Research Council. 1999. Statistics, Testing, and Defense Acquisition: Background Papers. Washington, DC: The National Academies Press. doi: 10.17226/9655.
×

The sufficient condition for decision problem 2 can be expressed by the following statement:

Notice that the left-hand side of condition B.3 exceeds the left-hand side of condition B.2 everywhere. This implies that the former will be satisfied whenever the latter holds. We therefore do not need to assume concavity for decision problem 2 since it is guaranteed by condition B.2.

The sufficient condition for game 5 can be expressed as the following statement:

We assume that condition B.4 is always satisfied.

The sufficient condition for game 3 can be expressed as the following statement:

Suggested Citation:"Strategic Information Generation and Transmission: The Evolution of Institutions in DoD Operational Testing." National Research Council. 1999. Statistics, Testing, and Defense Acquisition: Background Papers. Washington, DC: The National Academies Press. doi: 10.17226/9655.
×

Notice again that the left-hand side of condition B.5 exceeds the left-hand side of condition B.4 everywhere. This again implies that the former will be satisfied whenever the latter holds. We therefore do not need to assume concavity for game 3, since it is guaranteed by condition B.4.

This appendix has shown that only three of the decision problems and games considered require an explicit concavity assumption: decision problem 1 (condition B.1), game 4 (condition B.2), and game 5 (condition B.4). Concavity for decision problem 2 and game 3 follows from conditions B.2 and B.4, respectively.


References

Blue Ribbon Defense Panel 1970 Report to the President and the Secretary of Defense on the Department of Defense. Washington, D.C.: U.S. Government Printing Office.

Crawford, Vincent P., and Joel Sobel 1982 Strategic information transmission. Econometrica 50(6):1431-1451.

Green, Jerry R., and Nancy L. Stokey 1980 A Two-Person Game of Information Transmission. Harvard Institute of Economic Research Discussion Paper Number 751.

Hirshleifer, J., and J.G. Riley 1992 The Analytics of Information and Uncertainty. New York: Cambridge University Press.

Jovanovic, B. 1982 Truthful disclosure of information. Bell Journal of Economics 13:36-44.

Kofman, Fred, and Jacques Lawarrée 1993 Collusion in hierarchical agency. Econometrica 61(3):629-656.

Kreps, D.M., and R. Wilson 1982 Sequential equilibria. Econometrica 50:863-894.

Matthews, Steven, and Andrew Postlewaite 1985 Quality testing and disclosure. RAND Journal of Economics 16(3):328-340.

Milgrom, P.R. 1981 Good news and bad news: Representation theorems and applications. Bell Journal of Economics 12:380-391.

National Research Council 1995 Statistical Methods for Testing and Evaluating Defense Systems: Interim Report. Panel on Statistical Methods for Testing and Evaluating Defense Systems, Committee on National Statistics. Washington, D.C.: National Academy Press.

Shavell, Steven 1994 Acquisition and disclosure of information prior to sale. RAND Journal of Economics 25(1):20-36.

U.S. General Accounting Office 1979a The NAVSTAR Global Positioning System—A Program With Many Uncertainties. Washington, D.C.: U.S. Government Printing Office.

1979b Need for More Accurate Weapon System Test Results to Be Reported to the Congress. Washington, D.C.: U.S. Government Printing Office.

1980 DoD Information Provided to the Congress on Major Weapon Systems Could Be More Complete and Useful. Washington, D.C.: U.S. Government Printing Office.

1983 The Army Should Confirm Sergeant York Air Defense Gun's Reliability and Maintainability Before Exercising Next Production Option. Washington, D.C.: U.S. Government Printing Office.

1987 Testing Oversight. Washington, D.C.: U.S. Government Printing Office.

1988a Aquila Remotely Piloted Vehicle: Its Potential Battlefield Contribution Still in Doubt. Washington, D.C.: U.S. Government Printing Office.

1988b Quality of DoD Operational Testing and Reporting. Washington, D.C.: U.S. Government Printing Office.

1989 Electronic Warfare: Reliable Equipment Needed to Test Air Force's Electronic Warfare Systems. Washington, D.C.: U.S. Government Printing Office.

1990a Army Acquisition: Air Defense Antitank System Did Not Meet Operational Test Objectives. Washington, D.C.: U.S. Government Printing Office.

1990b Naval Aviation: The V-22 Osprey—Progress and Problems. Washington, D.C.: U.S. Government Printing Office.

1991a Army Acquisition: Air Defense Antitank System's Development Goals Not Yet Achieved. Washington, D.C.: U.S. Government Printing Office.

1991b Electronic Warfare: Faulty Test Equipment Impairs Navy Readiness. Washington, D.C.: U.S. Government Printing Office.

1991c Electronic Warfare: No Air Force Follow-up on Test Equipment Inadequacies. Washington, D.C.: U.S. Government Printing Office.

1992 Electronic Warfare: Established Criteria Not Met for Airborne Self-Protection Jammer Production. Washington, D.C.: U.S. Government Printing Office.