Strategic Information Generation and Transmission: The Evolution of Institutions in DoD Operational Testing

Eric M. Gaier, Logistics Management Institute; and Robert C. Marshall, Pennsylvania State University

1. Introduction

Several important papers in the field of information and uncertainty have focused on strategic information transmission (see, for example, Milgrom, 1981; Crawford and Sobel, 1982; or Green and Stokey, 1980). The majority of this research has taken the form of principal-agent games. In general, an agent observes some realization of a random variable which affects the payoff for each player. The agent then strategically signals the principal regarding the underlying realization. In the final stage, the principal takes some action which, in conjunction with the realization of the random variable, determines the payoff for each player. In equilibrium, the principal must take account of any bias in the agent's reporting strategy when determining the optimal action.

We present a model which extends the information transmission literature by allowing for a continuous choice of information quality. This is accomplished by letting the agent determine the probability with which he is able to distinguish one state from its complement. We call this stage of the game test design. In equilibrium, the principal must now account for the agent's selectivity in both the information generation and reporting stages. Thus, we present a model in which information is both strategically generated and strategically conveyed.

Since the preferences of the principal and the agent do not necessarily coincide, the test design and reporting process may be significantly biased in favor of the agent. The principal might therefore choose to exercise some oversight authority in the process. He could do so in several ways. He might choose to extend oversight authority during the test design stage.
Alternatively, the principal might choose to extend oversight authority during the reporting stage. Our model considers each of these cases. As the main result of the paper, we show that oversight of the test design stage always improves the welfare of the principal, while oversight of the test reporting stage may not. In addition, we consider the case in which the principal can extend oversight authority over both test design and test reporting.

We believe that the model describes a wide variety of interesting situations. Consider, for example, the promotion of assistant professors in disciplines with exceptionally thin job markets. Individual departments make assessments of candidates and report to the tenure committee. Although the tenure committee makes the final decision, the departments have the necessary expertise to gather the relevant data. Typically the tenure committee establishes the criteria by which individual departments judge the candidates. In the context of our model this is interpreted as oversight of the test design phase.

Another interesting application is found in the operational test and evaluation procedures used by the Department of Defense. It is in this context that we develop the model. The Department of Defense engages in two types of testing throughout the acquisition cycle. The emphasis in developmental testing is on isolating and measuring performance characteristics of individual components of a system. Developmental testing is conducted in a carefully controlled environment by highly trained technical personnel. The emphasis in operational testing, however, is on evaluating the overall capabilities and limitations of the complete system in a realistic operating environment. Operational testing is therefore conducted in a less controlled environment by trained users of the system. It is the role of this type of testing in the acquisition cycle that we investigate below.
The acquisition cycle follows a series of event-based decisions called milestones.1 At each milestone a set of criteria must be met in order to proceed with the next phase of acquisition. Operational testing is one of the last stages in this cycle. When a system is ready for operational testing, the exact details of the test are prepared by the independent test agencies within each Service. Tests must be prepared in accordance with

1. The interested reader is urged to see the interim report from the Panel on Statistical Methods for Defense Testing (National Research Council, 1995) for a complete description of the current acquisition cycle.
the Test and Evaluation Master Plan (TEMP), which spells out the critical operational issues to be addressed. The TEMP is prepared fairly early in the acquisition cycle but is continuously updated and modified. For major systems, both the TEMP and the operational test plan must receive approval from the Office of the Director of Operational Test and Evaluation (DOT&E). This Congressional oversight agency was created in 1983 mainly to oversee the test design process. In this way, Congress is able to extend oversight authority into the test design portion of operational testing. Fairly regularly, resource constraints prevent the testing agencies from addressing all of the critical operational issues. In such cases, testers must determine which issues to address and which to ignore.

The independent test agencies conduct the operational tests and evaluate the results. These evaluations are conveyed directly to the Service Chief, who reports the results to the relevant milestone decision authority. In the case of major systems, decision authority rests with the Undersecretary of Defense for Acquisition and Technology, who is advised by the Defense Acquisition Board. If the Undersecretary approves the acquisition, a procurement request is included in the Department of Defense budget request submitted to Congress. In addition, independent evaluations of test data are conducted by DOT&E, which reports directly to the Secretary of Defense and Congress. In so doing, DOT&E also exercises oversight authority in the reporting process.

The role of operational testing in the acquisition cycle has not always been characterized by the description given above. In fact, the entire procurement process has slowly evolved through a series of reform initiatives. Section 2 provides a brief description of the history of the role of operational testing in the acquisition cycle. We then introduce the model in order to gain insight into this process.
Section 3 provides an overview of the related literature. Section 4 develops the modeling framework and lists the assumptions of our model. In section 5 we introduce several games which are designed to capture the role of operational testing at various points in time. Our results are presented in sections 6 and 7. We conclude with section 8.
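Before turning to the history, the information-generation stage described in the introduction can be fixed in the mind with a toy computation. The sketch below is an illustration of the general idea only, not the chapter's model: it assumes a prior p that the system is good, an approval threshold tau for the principal, and a one-sided test technology in which a bad system fails with probability q (the agent's chosen test quality) and a good system never fails. All of these names and functional forms are assumptions of the sketch.

```python
# Toy sketch of the test-design stage: the agent (tester) chooses the
# lowest test quality q that still persuades the principal to approve
# on a "pass" result. Illustrative only; p, tau, and the one-sided
# test technology are assumptions, not the chapter's model.

def minimal_quality(p: float, tau: float) -> float:
    """Smallest q such that P(good | pass) >= tau.

    By Bayes' rule, P(good | pass) = p / (p + (1 - p) * (1 - q)).
    Solving for q gives q = 1 - p * (1 - tau) / ((1 - p) * tau).
    """
    q = 1.0 - p * (1.0 - tau) / ((1.0 - p) * tau)
    return max(0.0, q)  # quality 0 already suffices if the prior clears tau

def pass_probability(p: float, q: float) -> float:
    """Probability the test reports 'pass': good systems always pass,
    bad systems slip through with probability 1 - q."""
    return p + (1.0 - p) * (1.0 - q)
```

With an even prior (p = 0.5) and an approval threshold of 0.8, the agent's best response is q = 0.75 and the system is approved 62.5 percent of the time. In this sketch, oversight of test design corresponds to the principal dictating a q above the agent's preferred minimum.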
2. Historical Evolution of OT&E

The Air Force is generally considered to have been the early pioneer in operational testing. As early as May 1941, the Air Force Air Proving Ground Command was involved in the testing of new aircraft designs for possible procurement. Although operational testing in the other Services was soon initiated, the absence of strong oversight from the Department of Defense allowed each Service to develop unique regulations and procedures. Prior to 1970, for example, the Navy relied heavily on the subjective opinions of a few well-qualified officers. Little emphasis was given to the generation of verifiable data. Over the same time period, however, the Air Force had gone to great lengths to define a set of formal procedures and guidelines for the conduct of OT&E. As a result, Air Force testing generally produced objective data but lacked the flexibility to adjust to the specific requirements of individual systems.

Prior to 1971, the organization of OT&E also varied substantially across the Services. Although the Navy's test agency reported directly to the Chief of Naval Operations, the Air Force and Army test agencies were subordinate to lower levels of command. The Air Force and the Army were repeatedly criticized for allowing their testing agencies to report to organizations which were responsible for the development of new systems. Partially in response to these concerns, the Deputy Secretary of Defense directed the military services in February 1971 to designate OT&E field commands independent of the system developers and the eventual users. These agencies were instructed to report directly to the relevant Chief of Staff. Navy testing responsibility continued to reside with the Operational Test and Evaluation Force (OPTEVFOR), while testing responsibility was assigned to the Air Force Test and Evaluation Command (AFTEC)2 and the Army Operational Test and Evaluation Agency (OTEA).
Prior to 1971, the Department of Defense was not required to convey the results of operational testing to the Congress. In the absence of testing data, Congress generally deferred to DoD expertise on program funding allocations. In addition, Congress was not involved in the

2. AFTEC has now become the Air Force Operational Test and Evaluation Command (AFOTEC).
design or implementation of operational testing. Over this time period, therefore, the Department of Defense was able to exert considerable influence over the status of individual programs.

As part of its continued effort to become more involved in the procurement process, Congress enacted Public Law 92-156 in 1971. This law requires the Department of Defense to report OT&E results to the Congress annually. Armed with these testing results, Congress began to take a more active role in determining which programs to fund and which to terminate. However, the design and conduct of operational testing continued to be the responsibility of the Department of Defense. Although Public Law 92-156 certainly reduced DoD's explicit influence over funding decisions, DoD continued to exert considerable influence over the acquisition process through its choice of operational tests. The model will show how DoD might have altered its testing strategy in light of Congressional involvement.

Over the period 1971 through 1983, Department of Defense testing procedures received strong criticism from Congress and the General Accounting Office (GAO). Many of these complaints focused on a perceived inadequacy in DoD testing. In 1983, for example, GAO determined that reliability and maintainability testing on the Army's Sergeant York Air Defense Gun had been inadequate to support the production decision (U.S. General Accounting Office, 1983). Similarly, the President's 1970 Blue Ribbon Defense Panel concluded that both developmental and operational testing of the Army M-16 rifle had been inadequate (Blue Ribbon Defense Panel, 1970). In 1979, GAO concluded that developmental testing was also inadequate in the case of the joint Air Force/Navy NAVSTAR Global Positioning System (GPS) (U.S. General Accounting Office, 1979a).
Although such criticisms are certainly not limited to the time frame described above, the model will show in what sense testing might have been perceived as inadequate.3 As a result of allegations such as these, Congress became increasingly concerned with the planning and conduct of testing in the Department of Defense. The President's Blue Ribbon Panel also recommended the creation of an organization above the Service level to help give direction to the operational test agencies. In 1983, Congress instructed DoD to create

3. The Army's Aquila Remotely Piloted Vehicle (U.S. General Accounting Office, 1988a) is an example of a program which was criticized for inadequate testing outside the time period described.
the Office of the Director of Operational Test and Evaluation to fill this oversight role. DOT&E is headed by a civilian who is appointed by the President and confirmed by the Congress.

DOT&E is charged with two primary roles. First, DOT&E is directed to be the principal advisor to the Secretary of Defense regarding OT&E matters. Second, DOT&E is directed to report to Congress on the adequacy of operational testing and the desirability of allowing systems beyond low-rate initial production. In fulfilling these primary roles, DOT&E has assumed several responsibilities. First, DOT&E is responsible for the prescription of policies and procedures for the conduct of OT&E. Second, DOT&E provides advice to the Secretary of Defense and makes recommendations to military departments regarding OT&E in general and on specific aspects of operational testing for major systems. In this regard, operational test plans for major acquisitions require DOT&E approval. Third, DOT&E monitors and reviews the conduct of OT&E by the Services. Fourth, DOT&E is responsible for an independent analysis of the results of OT&E for each major system and must report directly to the Secretary of Defense, the Senate and House Armed Services Committees, and the Senate and House Committees on Appropriations. In each case, DOT&E is directed to analyze the adequacy of operational testing as well as the effectiveness and suitability of the tested system. Fifth, DOT&E is responsible for advising the Secretary of Defense regarding all budgetary and financial matters relating to OT&E.

It is well documented that DOT&E had only a limited impact for the first several years of its existence (U.S. General Accounting Office, 1987). The post of Director remained vacant for nearly two years while the Office continued to be underfunded and understaffed. During this time, DOT&E received criticism for failing to adequately monitor Service operational testing.
In addition, the General Accounting Office determined that DOT&E reports to the Secretary of Defense and the Congress were not composed independently as required by law. In several instances GAO found DOT&E reports which were copied verbatim from Service documents. In the first several years, DOT&E was therefore unable to fulfill one of its major responsibilities. DOT&E was, however, largely successful in its early attempts to improve test planning and implementation. To this end, DOT&E developed a uniform set of guidelines for Service operational testing and revised Department of Defense Directive 5000.3, Test and Evaluation. In
1987, GAO determined that DOT&E had significantly impacted the testing process through its careful review of operational test plans (U.S. General Accounting Office, 1987). On many occasions, the Services were required to make significant revisions in operational test plans for major acquisitions in order to get DOT&E approval. GAO concluded that the adequacy of operational testing was significantly improved by DOT&E's efforts in this regard. Our model will yield considerable insight into DOT&E's decision to reform the test planning process at the expense of ignoring the reporting process.

Since the formation of DOT&E, the Department of Defense has faced renewed criticism. The General Accounting Office and the DoD Inspector General have accused DoD officials of manipulating test results to yield the most favorable interpretation possible. The most highly publicized case involved the Navy's Airborne Self-Protection Jammer (ASPJ) (U.S. General Accounting Office, 1992). The specific allegations stemmed from the reporting of reliability growth tests which were being conducted as part of Initial Operational Test and Evaluation. After testing had begun, Navy testers changed the testing criteria to exclude certain self-diagnostic software failures as not relevant. With these failures excluded, ASPJ was reported to have passed the test criteria. However, the inclusion of these data would have resulted in a test failure. Similar allegations have been levied against other programs, including the various electronic countermeasures programs of the 1980s (U.S. General Accounting Office, 1989, 1991b, 1991c) and the Army's Air Defense Antitank System (ADATS) (U.S. General Accounting Office, 1991a, 1990a).
Although criticisms of the reporting process are not limited to the time period described, the model will yield considerable insight into this reporting phenomenon.4 In response to allegations such as these, DOT&E has concentrated additional effort on oversight of the test reporting process. DOT&E officials have begun to monitor the progress of operational testing on site. In addition, DOT&E officials currently conduct independent evaluations of operational test results. These evaluations are drawn directly from the raw test data and are not subject to DoD interpretation. DOT&E reports directly to the

4. See any of the following GAO publications for additional criticisms of the reporting process (U.S. General Accounting Office, 1979b, 1980, 1988b).
Congress. If DoD disagrees with any of the conclusions reached by DOT&E, it may append its own comments to the report to Congress.

3. Related Literature

An important avenue of research on the topic of information transmission was initiated by Milgrom (1981). As an application of more general theorems regarding the monotone likelihood ratio property (MLRP), Milgrom introduces games of persuasion. In a persuasion game an interested party (the agent) possesses private information regarding the underlying state of nature and attempts to influence a decision maker (the principal) by selectively providing data. For example, the agent might be a salesman who has information regarding the quality of his product and selectively conveys a subset of the data to a consumer. In equilibrium, the consumer accounts for the salesman's selectivity in reaching a consumption decision.

By assumption, the agent is unable (or unwilling, because of infinite penalties) to communicate reports which are incorrect. Matthews and Postlewaite (1985) have described this assumption as the imposition of effective antifraud regulations. In light of these antifraud regulations, reports from the agent are limited to supersets of the truth. The salesman may, for example, claim that the product meets or exceeds some criterion if and only if the criterion is satisfied. At the discretion of the agent, however, the report may range from entirely uninformative to absolutely precise. Milgrom shows that a Nash equilibrium always exists in which the principal resolves to ignore all reports and the agent makes only uninformative reports. However, a proposition demonstrates that every sequential equilibrium (Kreps and Wilson, 1982) of the persuasion game involves precise revelation of the truth by the agent. At the sequential equilibrium, the principal believes that any information withheld by the agent is extremely unfavorable. In the face of such extreme skepticism the agent's best response is truthful revelation.
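The skepticism logic behind this result can be checked by brute force in a small discrete example. The sketch below assumes five quality levels and antifraud reports that must contain the true quality (supersets of the truth); the extremely skeptical principal values any report at its worst element. The quality grid and valuation rule are illustrative assumptions, not Milgrom's general model.

```python
# Enumerate a tiny persuasion game: against an extremely skeptical
# principal, every agent type's best attainable valuation is the truth,
# so withholding information cannot help (the unraveling logic).
from itertools import combinations

QUALITIES = [1, 2, 3, 4, 5]  # assumed discrete quality levels

def admissible_reports(v):
    """Antifraud: a report is any subset of qualities containing the truth v."""
    reports = []
    for size in range(1, len(QUALITIES) + 1):
        for combo in combinations(QUALITIES, size):
            if v in combo:
                reports.append(combo)
    return reports

def skeptical_value(report):
    """Extreme skepticism: the principal assumes the worst quality
    consistent with the report."""
    return min(report)

def best_achievable_value(v):
    """The most a type-v agent can obtain against a skeptical principal."""
    return max(skeptical_value(r) for r in admissible_reports(v))
```

Running `best_achievable_value` over all five types shows that each type's best attainable valuation equals its true quality, so in effect every type reveals itself, which is the unraveling result in miniature.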
Matthews and Postlewaite (1985) extend Milgrom's model by adding an earlier stage in which the agent chooses whether or not to become informed. They assume that the cost of
acquiring information is zero. In this context, they distinguish between mandatory disclosure and antifraud regulations. Under mandatory disclosure, an agent must disclose whether or not he has acquired information. Mandatory disclosure does not, however, require the truthful conveyance of the information acquired; truthful reporting of information is still governed by antifraud. Matthews and Postlewaite assume effective antifraud throughout the paper but consider variations of the model with and without mandatory disclosure.

Using the solution concept of sequential equilibrium, Matthews and Postlewaite examine the dependence of information acquisition upon disclosure rules. They show that the agent will acquire and fully disclose information whenever disclosure is not mandatory. When disclosure is mandatory, however, the agent may or may not acquire information. Note that in the presence of antifraud, agents who do not acquire information must report total ignorance to avoid any chance of misrepresenting the truth. In the absence of mandatory disclosure, the sequential equilibrium calls for the principal to adopt extreme skepticism toward any report of ignorance. In the face of such extreme skepticism, agents choose to acquire information and fully reveal. The extreme skepticism on the part of the principal completely unravels any possible equilibrium claim of ignorance by the agent. Results such as these have been termed unraveling results. Avoiding this unraveling requires some type of credibility for claims of ignorance by the agent. In the context of their model, mandatory disclosure provides this credibility and impedes the unraveling.

Shavell (1994) extends the model of Matthews and Postlewaite in several important directions. Shavell allows the cost of acquiring information to be privately held by the agents. Shavell also considers cases in which the information acquired may be socially valuable.
Socially valuable information increases the underlying value of the exchange between the agent and the principal. As in Matthews and Postlewaite, Shavell assumes effective antifraud and analyzes the impact of mandatory disclosure. Shavell shows that unraveling may be impeded even in the absence of mandatory disclosure. At the sequential equilibrium, two types of agents claim ignorance. The first type have realized cost draws which exceed the expected value of acquiring information; they are truly ignorant. The second type have acquired information which was so unfavorable that they
achieve a higher payoff by claiming ignorance. In equilibrium, the principal simply assigns the appropriate probability to each type when computing his reservation value for exchanges with agents claiming ignorance. Unraveling is also impeded when the information acquired is socially valuable. In short, the privacy of the cost draw gives credibility to the claims of ignorance by the agents. This credibility is enough to preclude the unraveling effect.

Such a result is in stark contrast with Matthews and Postlewaite. This contrast highlights the critical importance of the assumption regarding the distribution of costs. When the cost distribution is not degenerate, the unraveling effect is impeded and the principal must give credibility to claims of ignorance.5 However, as the cost distribution becomes degenerate, the principal's skepticism completely unravels any claim of ignorance by the agent. In Matthews and Postlewaite, therefore, it is not the assumption that the costs of acquiring information are zero which drives the unraveling result; rather, it is the degeneracy of the cost distribution. Jovanovic (1982) reaches a similar conclusion by imposing privately known costs of conveying information upon the agent. It seems clear that some private information on the part of the agent is required to avoid the unraveling effect.

Kofman and Lawarrée (1993) present a variant in which the agent takes an action which partially determines the state of nature. Although the state of nature is revealed to the principal, the action taken by the agent is not observed. In this context, the principal may employ an internal auditor to gather more accurate information regarding the agent's action. The model allows for the possibility that the internal auditor may be involved in a collusive agreement with the agent. In equilibrium, however, collusion is stymied by bounty-hunter contracts in which the principal gives any penalty extracted from the agent directly to the auditor.
Kofman and Lawarrée also consider the case in which an external auditor may be employed. The external auditor does not have the possibility of colluding with the agent, but lacks the expertise to gather data as accurately as the internal auditor. A proposition determines the conditions under which the principal will use the internal auditor, the external auditor, or 5 In this context, degeneracy requires only a support for the cost distribution which does not include the value of acquiring information.
both. Although they do not elaborate, Kofman and Lawarrée indicate that the model is consistent with the relationship between Congress and the Department of Defense. Perhaps DOT&E would play the role of the external auditor and the Service test agencies would play the role of the internal auditor.

Crawford and Sobel (1982) take an entirely different approach to games of information transmission. In their model, the preferences of the two parties are somewhat aligned. Crawford and Sobel completely relax the antifraud assumption to allow for a type of cheap talk communication. Although equilibrium messages will not necessarily involve full disclosure, they show that antifraud is not violated at equilibrium. Crawford and Sobel show that all the Bayesian Nash equilibria are partition equilibria. In a partition equilibrium, the agent introduces noise into his report by partitioning the state space and reporting only the partition element in which the realization lies. The size of the individual elements varies directly with the proximity of the parties' preferences. For identical preferences, the partition is arbitrarily fine and the report is precise. As preferences diverge, the elements grow in size and the agent attempts to pool over larger and larger sets of realizations. If preferences are sufficiently different, the agent partitions the state space into a single element, which amounts to a claim of ignorance. Crawford and Sobel show that if the preferences of the parties do not coincide, the equilibrium number of partition elements is always finite; thus information is never perfectly revealed. Such a result is in sharp contrast with the results of Milgrom and of Matthews and Postlewaite.

Green and Stokey (1980) consider a similar game from an alternate perspective. The preferences of the parties are held constant while the information structure itself is varied.
Green and Stokey demonstrate that a more informative information structure does not necessarily imply higher welfare for the parties.6 Examples are constructed in which the welfare of each party is either reduced or enhanced by improvements in the information structure. In addition, Green and Stokey identify several types of equilibria, including partition equilibria. For the purpose of

6. One information structure is said to be more informative than another if it provides higher expected utility for a decision maker regardless of the utility function. See Hirshleifer and Riley (1992) for a complete discussion.
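Crawford and Sobel's partition result can be made concrete in their well-known uniform-quadratic specification: the state is uniform on [0, 1], both parties have quadratic losses, and the agent's ideal action exceeds the principal's by a bias b. Under those assumptions the boundaries of an N-element partition equilibrium satisfy a_{i+1} - 2a_i + a_{i-1} = 4b, with closed form a_i = i/N + 2b i(i - N). This specification is standard textbook material rather than anything taken from the present chapter.

```python
# Partition equilibria in the uniform-quadratic cheap-talk example
# (state uniform on [0, 1], quadratic losses, agent bias b > 0).

def max_partition_elements(b: float) -> int:
    """Largest N for which an N-element partition equilibrium exists.

    Boundaries must be strictly increasing, which in the closed form
    reduces to 2 * b * N * (N - 1) < 1.
    """
    n = 1  # a single element (babbling / claim of ignorance) always exists
    while 2.0 * b * (n + 1) * n < 1.0:
        n += 1
    return n

def boundaries(b: float, n: int) -> list:
    """Partition boundaries a_0 = 0 < a_1 < ... < a_n = 1."""
    return [i / n + 2.0 * b * i * (i - n) for i in range(n + 1)]
```

For b = 0.05 the finest equilibrium has three elements, with boundaries approximately [0, 0.133, 0.467, 1]; for a large bias such as b = 0.3 only the single-element (uninformative) equilibrium survives, matching the text's observation that sufficiently divergent preferences reduce the report to a claim of ignorance.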
where the last inequality holds because the first term is necessarily positive and the second is positive by lemma 1.

To prove that the game 4 objective exceeds the game 3 objective, we write the objective function from game 4 in terms of the game 3 objective function:

Taking the derivative and evaluating at the solution to π3 yields the following:

Simplifying and proceeding as above, we have the following:
where the last inequality results from the fact that the first term is necessarily positive and the second is positive by lemma 1.

To show that Π2 exceeds Π5, we write Π2 as a function of π5:

Evaluating the derivative of equation A.16 at the solution to game 5 and proceeding as above, we have the following:

where the last inequality follows from lemma 1.

To show that Π4 exceeds Π5, we write Π4 as a function of π5:

Evaluating the derivative of equation A.18 at the solution to game 5 and proceeding as above, we have the following:
where the last inequality follows from lemma 1.

We begin by writing the objective function for decision problem 1 in terms of the game 5 objective function:

The first order condition for decision problem 1, evaluated at the solution to game 5, is given by the following:
where the last equality follows from the fact that information is not socially valuable. When information has no social value, the first order conditions for game 5 simplify to the following equation:

Combining A.22 with A.21, we obtain the following:

where the final inequality results from the negativity of RA.

To show that Π1 exceeds Π3, we write Π1 as a function of π3:

Evaluating the derivative of equation A.24 at the solution to game 3 and simplifying, we have the following:
This concludes the proof of proposition 3.
Appendix B: Second Order Conditions

This appendix details the implications of the concavity restrictions we impose on the objective functions in section 5. We begin by considering decision problem 1. It can easily be shown that the sufficient condition for an interior maximum is 2L12 - L11 - L22 > 0, where Lij for i, j = 1, 2 denotes the second partial derivative of the constrained optimization problem with respect to arguments i and j. In the context of decision problem 1, this condition is given by the following statement:

We assume that condition B.1 is always satisfied.

The sufficient condition for game 4 can be expressed by the following statement:

We assume that condition B.2 is satisfied.
The sufficient condition for decision problem 2 can be expressed by the following statement:

Notice that the left-hand side of condition B.3 exceeds the left-hand side of condition B.2 everywhere. This implies that the former will be satisfied whenever the latter holds. We therefore do not need to assume concavity for decision problem 2, since it is guaranteed by condition B.2.

The sufficient condition for game 5 can be expressed as the following statement:

We assume that condition B.4 is always satisfied.

The sufficient condition for game 3 can be expressed as the following statement:
Notice again that the left-hand side of condition B.5 exceeds the left-hand side of condition B.4 everywhere. Again, this implies that the former will be satisfied whenever the latter holds. We therefore do not need to assume concavity for game 3, since it is guaranteed by condition B.4. This appendix has shown that only three of the decision problems and games considered require a concavity assumption.
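The sufficient condition 2L12 - L11 - L22 > 0 used throughout this appendix is the requirement that the objective be concave along the direction in which the two constrained arguments trade off one for one: along a constraint of the form x + y = c, the second derivative of L(t, c - t) is L11 - 2L12 + L22, the negative of the condition. A quick finite-difference check on an assumed quadratic objective (chosen purely for illustration, not one of the chapter's objective functions) confirms the correspondence:

```python
def second_partials(L, x, y, h=1e-4):
    """Central-difference estimates of L11, L22, and L12 at (x, y)."""
    L11 = (L(x + h, y) - 2 * L(x, y) + L(x - h, y)) / h**2
    L22 = (L(x, y + h) - 2 * L(x, y) + L(x, y - h)) / h**2
    L12 = (L(x + h, y + h) - L(x + h, y - h)
           - L(x - h, y + h) + L(x - h, y - h)) / (4 * h**2)
    return L11, L22, L12

def L(x, y):
    # Illustrative quadratic objective; its exact second partials are
    # L11 = -2, L22 = -2, L12 = 3.
    return -x**2 - y**2 + 3 * x * y

L11, L22, L12 = second_partials(L, 0.5, 0.5)
condition = 2 * L12 - L11 - L22  # the appendix's sufficient condition

# Along the constraint x + y = 1, write g(t) = L(t, 1 - t); then
# g''(t) = L11 - 2*L12 + L22, the negative of the condition above.
h = 1e-4
g = lambda t: L(t, 1 - t)
g_second = (g(0.5 + h) - 2 * g(0.5) + g(0.5 - h)) / h**2
```

Here the condition evaluates to 10 > 0 while the constrained second derivative is -10 < 0, so the positive condition does signal an interior maximum along the constraint.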
References

Blue Ribbon Defense Panel. 1970. Report to the President and the Secretary of Defense on the Department of Defense. Washington, D.C.: U.S. Government Printing Office.

Crawford, Vincent P., and Joel Sobel. 1982. Strategic information transmission. Econometrica 50(6):1431-1451.

Green, Jerry R., and Nancy L. Stokey. 1980. A Two-Person Game of Information Transmission. Harvard Institute of Economic Research Discussion Paper Number 751.

Hirshleifer, J., and J.G. Riley. 1992. The Analytics of Information and Uncertainty. New York: Cambridge University Press.

Jovanovic, B. 1982. Truthful disclosure of information. Bell Journal of Economics 13:36-44.

Kofman, Fred, and Jacques Lawarrée. 1993. Collusion in hierarchical agency. Econometrica 61(3):629-656.

Kreps, D.M., and R. Wilson. 1982. Sequential equilibria. Econometrica 50:863-894.

Matthews, Steven, and Andrew Postlewaite. 1985. Quality testing and disclosure. RAND Journal of Economics 16(3):328-340.

Milgrom, P.R. 1981. Good news and bad news: Representation theorems and applications. Bell Journal of Economics 12:380-391.

National Research Council. 1995. Statistical Methods for Testing and Evaluating Defense Systems: Interim Report. Panel on Statistical Methods for Testing and Evaluating Defense Systems, Committee on National Statistics. Washington, D.C.: National Academy Press.
Shavell, Steven. 1994. Acquisition and disclosure of information prior to sale. RAND Journal of Economics 25(1):20-36.

U.S. General Accounting Office. 1979a. The NAVSTAR Global Positioning System: A Program With Many Uncertainties. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1979b. Need for More Accurate Weapon System Test Results to Be Reported to the Congress. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1980. DoD Information Provided to the Congress on Major Weapon Systems Could Be More Complete and Useful. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1983. The Army Should Confirm Sergeant York Air Defense Gun's Reliability and Maintainability Before Exercising Next Production Option. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1987. Testing Oversight. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1988a. Aquila Remotely Piloted Vehicle: Its Potential Battlefield Contribution Still in Doubt. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1988b. Quality of DoD Operational Testing and Reporting. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1989. Electronic Warfare: Reliable Equipment Needed to Test Air Force's Electronic Warfare Systems. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1990a. Army Acquisition: Air Defense Antitank System Did Not Meet Operational Test Objectives. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1990b. Naval Aviation: The V-22 Osprey: Progress and Problems. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1991a. Army Acquisition: Air Defense Antitank System's Development Goals Not Yet Achieved. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1991b. Electronic Warfare: Faulty Test Equipment Impairs Navy Readiness. Washington, D.C.: U.S. Government Printing Office.
U.S. General Accounting Office. 1991c. Electronic Warfare: No Air Force Follow-up on Test Equipment Inadequacies. Washington, D.C.: U.S. Government Printing Office.

U.S. General Accounting Office. 1992. Electronic Warfare: Established Criteria Not Met for Airborne Self-Protection Jammer Production. Washington, D.C.: U.S. Government Printing Office.