Appendix E
Military Applications of Scientific Information

JAMES E. SCHROEDER

The research and development process in a military environment is difficult to characterize; there are probably as many exceptions as there are rules. Nevertheless, it is important to put the committee's findings in the larger context. This appendix was prepared at the request of the committee to provide general knowledge based on the author's observations of how the process works. Many of the ideas expressed do not describe formal policy. Discussion is limited to the field of applied psychology and may or may not generalize to hardware development. Although the following discussion is centered around military research and development, there are probably meaningful parallels in other, nonmilitary research and development programs. The reader is advised to read Crawford (1970) and Drucker (1976) for other discussions of research, development, and utilization of psychological products in the Army.

One common representation of the ideal process is provided by the Department of Defense research and development funding taxonomy, which defines the process in terms of the four funding steps shown in Figure E-1. In this model, the research and development process is represented by a funding continuum ranging from basic science through engineering development. In the ideal case, a potentially useful scientific finding would emerge from the basic research laboratory. This information is then "picked up by" or "handed to" applications-oriented scientists in the military setting for applied research and exploratory development. If the resulting applied research findings are promising and there are potential applications, then a project would proceed to the advanced development stage for further enhancement and adaptation to a particular setting. In the engineering development stage, the specific engineering design requirements are made and actual delivery equipment or software is developed.

FIGURE E-1 Schematic representation of the transfer of knowledge from the basic science laboratory to a final product. [Four steps in sequence: Basic Research, Applied Research and Exploratory Development, Advanced Development, Engineering Development.]

With the apparent logic and simplicity of this model, it is often difficult for people outside the system to understand why the transfer of new scientific information is slow or absent. Individuals and organizations within the development continuum complain of deficiencies in the other sectors. Basic scientists cannot understand why their theories or findings have not been applied, and applied scientists question why basic scientists don't work on topics with more application potential (Weinstein, 1986).

SOURCES OF "ERROR" IN THE RESEARCH AND DEVELOPMENT PROCESS

Most people who are familiar with military research and development would probably agree that the model just described, while presenting a useful ideal, is deceptively simple, and the actual process is tremendously more complex. For the sake of simplicity, consider two general classes

of errors that can be made at a multitude of points along the research and development continuum. To borrow terms from hypothesis testing, let Type I error represent a class of errors that result in an invalid or inapplicable idea, procedure, theory, and so on, being inaccurately assessed as valuable and continuing in the development process. Let Type II error represent a set of errors in which a truly valuable potential application, for whatever reason, does not continue on the development path. These two types of errors can occur at any point along the continuum represented in Figure E-1; however, it is worthwhile to divide the source of the error into two major categories. In this appendix, a within-step source of error refers to an error of either type that occurs as a result of the operations performed inside any of the boxes shown in Figure E-1; a between-step source of error refers to either type of error that occurs as a function of the procedures involved in handing off a project from one level to another level (represented by the arrows in Figure E-1).

WITHIN-STEP SOURCES OF ERROR

In general, these sources of error refer to traditional research design problems. While they remain troublesome issues that must be adequately dealt with in either basic or applied settings, many of these sources of error have already been identified. In addition, there is a substantial literature describing ways to eliminate, avoid, minimize, or measure most of these contaminating sources of error; Chapter 2 of the committee's report deals with some of these issues. It should be noted that these potential errors could occur in any of the boxes, since experimental evaluations presumably occur at all stages. In summary, even if there is a potential and obtainable product that could evolve from some basic science finding, there are still many potential pitfalls within the steps taken along that path.
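The two classes of error defined above can be sketched as a toy gate-keeping simulation. This is a hypothetical illustration, not part of the appendix: the per-stage pass rates and the even split of valuable versus worthless ideas are assumptions chosen only to make the two error types concrete.

```python
import random

# Hypothetical illustration of Type I and Type II errors along the
# four-step funding continuum of Figure E-1. The pass rates and the
# 50/50 split of valuable vs. worthless ideas are assumptions.
STAGES = ["basic research", "applied research and exploratory development",
          "advanced development", "engineering development"]
PASS_IF_VALUABLE = 0.80   # assumed chance a valuable idea survives one gate
PASS_IF_WORTHLESS = 0.20  # assumed chance a worthless idea slips through one gate

def survives_pipeline(valuable, rng):
    """True if a project passes every gate along the funding continuum."""
    p = PASS_IF_VALUABLE if valuable else PASS_IF_WORTHLESS
    return all(rng.random() < p for _ in STAGES)

def simulate(n=100_000, seed=7):
    """Return the observed Type I and Type II error rates."""
    rng = random.Random(seed)
    type1 = type2 = 0
    for _ in range(n):
        valuable = rng.random() < 0.5
        survived = survives_pipeline(valuable, rng)
        if survived and not valuable:
            type1 += 1  # Type I: an invalid idea assessed as valuable and continued
        elif not survived and valuable:
            type2 += 1  # Type II: a truly valuable application dropped from the path
    return type1 / n, type2 / n

t1, t2 = simulate()
print(f"Type I rate: {t1:.4f}  Type II rate: {t2:.4f}")
```

Under these assumed rates, even fairly accurate independent gates compound against good ideas: a valuable idea survives all four reviews only about 41 percent of the time (0.8 to the fourth power), one way to see why losses of the Type II kind can dominate in a long continuum.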
While these pitfalls are clearly dangerous, they are widely known, and scientists have discovered and promulgated ways of recognizing, avoiding, and adjusting for most of them.

BETWEEN-STEP SOURCES OF ERROR

The arrows in Figure E-1 are deceptive. To the casual observer who is not familiar with the process, they would indicate a smooth flow of information from one step to the next. Indeed, this flow is often surprisingly smooth when one considers the multitude of issues involved in this evolution. As defined above, a between-step source of error is any condition that produces or contributes to one of two possible error states:

failure to continue an effort that actually could provide a significant improvement, or continuing an effort that actually has no significant fielded potential. There is a great deal of activity which must occur between the steps identified in Figure E-1. Much of this activity involves complex decision making in which uncertainty, political considerations, readiness considerations, and cost considerations are often great. In the following section, Figure E-1 is revisited, with more careful attention paid to the transitions.

DECISIONS INVOLVED IN THE RESEARCH AND DEVELOPMENT PROCESS

Figure E-2 provides some information about the complexity of the decisions involved as an original idea is transformed into a meaningful product. Two additional stages, implementation and sustainment, have been added to the traditional steps shown in Figure E-1 because they are very important in ensuring that products are fully utilized. Implementation refers to the steps taken to successfully field a product, and sustainment refers to the steps taken to maximize the use of the product over its life cycle. It is important to note that Figure E-2 and the following discussion are probably not complete. The purpose of this appendix is to provide the reader with a sample of the complexity involved in carrying an idea from conception to some useful military application, not to provide a complete documentary of the Army's research and development process for psychological products as it has emanated from Department of the Army regulation 10-7 (U.S. Department of the Army, 1981) or transfer of technology issues. For examples of this kind of documentation, see Morton, 1969; Gruber and Marquis, 1969; Seurat, 1979; Allen Corporation, 1985.

DECISIONS LEADING TO BASIC RESEARCH

An imperfect mechanism is involved in all of the transitions shown in Figure E-2: namely, human decision making.
Although the basic research scientist, the applied scientist, and the evaluator all use techniques designed to eliminate, measure, or at least attempt to minimize various sources of error, it is nevertheless true that decisions about what ideas find their way to useful products are still based on human decision making and, consequently, are vulnerable to the imperfections and potential biases of that process (Tversky and Kahneman, 1974; Lichtenstein et al., 1978; Slovic, 1972; Kahneman and Tversky, 1979). Ideally, applied military research programs benefit from the entire pool of basic science research. Since there is a large pool of funding sources for basic science (e.g., universities, foundations, private companies, government scientific agencies, government military agencies), the comments in this section are relatively general and may be relevant to any funding agency. In contrast, comments in the rest of the appendix are limited to military research and development.

FIGURE E-2 Schematic representation with more detail about the decisions involved in the research and development process. [The figure shows the six stages in sequence, each preceded by the decisions that govern entry to it:

Basic Research: 1. What basic research gets funded? 2. Is the level of effort sufficient? 3. Does the applied scientist have input? 4. Does the potential user have input? 5. Is there a theoretical or empirical basis? 6. What is the history of success in the area? 7. Is there application potential?

Applied Research and Exploratory Development: 1. Is there sufficient empirical support in the literature? 2. Was the basic research conducted sufficiently high in quality and internal validity? 3. Are successes noticed by the right people? 4. Are there potential applications? 5. Are potential applications noticed? 6. Is there an "agent" (individual or organization)? 7. Is there a user who can profit? 8. Is there a sponsor with sufficient funds? 9. Does the basic scientist have input?

Advanced Development: 1. What applied research gets selected for development? 2. Are successes noticed? 3. Are failures reassessed? 4. Is the user still interested? 5. Is there still a sponsor with funds?

Engineering Development: 1. What gets selected for engineering development? 2. How effective is the prototype? 3. What are the cost-benefit factors?

Implementation: 1. Who produces the final product? 2. How is it promulgated, distributed, maintained, replaced, and so on? 3. Is there command emphasis? 4. Who will oversee the implementation? 5. Is there flexibility in the product? 6. Is there flexibility in the system? 7. Does the targeted audience still want the product?

Sustainment: 1. Is there a motive for using the product? 2. Is there still command emphasis? 3. Does the product have face validity? 4. Does the user feel the product works? 5. Is there a vehicle for updating the product? 6. Is the product available? 7. What has been done to ensure continued use? 8. In the case of a psychological product, what is the vehicle for transfer?]

Although decisions about which ideas, concepts, and theories receive basic research funding are usually made by experts, they are still subject to sources of bias. In fact, experts may be susceptible to special sources of error because of their expertise (for the "mind snapping shut" phenomenon, see Perrin and Goodman, 1978; Zeleny, 1982; for overconfidence, see Lichtenstein and Fischhoff, 1977). The following are possible sources of bias that, whether leading to a correct or an incorrect decision, probably do affect the chances of a research proposal's getting through the initial gate:

1. Although many funding agencies attempt to conduct blind reviews, in practice this is often difficult, because information in the proposal provides the expert reviewer clues to the author's identity. Any hints of identity can produce other potential biases, such as the identity of the university or organization involved, the reputation of the investigator in the proposed field, the investigator's publications, and so on.

2. Even if the author is unidentified or the reviewer is able to discount the author's identity, there are inevitably references to theoretical positions and scientific philosophy that could provide identification and subtly bias a reviewer.

3. There may be subtle or not-so-subtle political pressures on reviewers to fund certain areas.
For example, if the news media highlight some new procedure as promising (even though such claims may not be founded in data), there will almost certainly be some pressure (internal or external) placed on an agency or reviewer to give such proposals special attention.

4. There may be biases on the part of some reviewers to reject proposals that are radically different from the existing literature, have little or no empirical support, or are generated by nontraditional sources. While logically defensible, such a stance might stifle valuable new approaches.

5. Some reviewers might be subject to the influence of early results. Early results (positive or negative) may carry more weight than is justified, especially if popularized in the media. In addition, there may be a bias for positive results to be published in the literature (Sterling, 1959; Rosenthal, 1966).

6. Some research topics have acquired distinct reputations based on a history of findings in a given direction. This may produce a bias, leading

some reviewers to reject a proposal because a significantly different or novel approach may be involved in the proposed research.

7. There may be pressure on some agencies to fund new ideas, stay on the cutting edge, or be the first to discover something. This probably leads to a bias to fund different (as long as they are not too different) topics. While progressive, such biases could leave promising older approaches without funding and hence without progress. Psychological research appears to be novelty-oriented, with many investigators following the lead of a relatively small number of intellectual entrepreneurs. While the work of a few investigators receives great attention, the systematic and tedious investigations of traditional scientists may go without funding or appropriate recognition.

8. Decisions are often made on the basis of a small number of reviewers. Procedures that have been developed to minimize biases in group decision making (e.g., the Delphi or modified Delphi procedures, Linstone and Turoff, 1979) are often not used because of time or budget constraints.

9. Scientific reviewers have been trained to be critical. The critical review is, of course, an important and necessary part of the scientific process. Reviewers obtain and retain respect and credibility among their colleagues by identifying all possible faults. The danger is that a poorly written proposal, one that does not follow a prescribed professional format, or one that deviates significantly from the reviewers' expectations, may not be funded, even though a potentially valuable contribution might result.

10. Some reviewers tend to favor proposals that are founded in existing theory; of course, there are probably some reviewers who have the opposite bias.
The potential danger is that, if proposals that offer to investigate simple empirical relationships are rejected because they lack a theoretical basis, many potentially useful and applicable research proposals may never be funded. Further exacerbating the problem, investigators submitting research proposals are asked to justify their research. Investigators offering proposals not based on theory may be more likely to examine and provide real-world applications as a justification for their work, while investigators whose proposals are based on theory may be more likely to offer refinement of the theory as a justification. If this is true, then any reviewer with a bias toward existing theory may inadvertently eliminate research that has been targeted for specific applications.

11. Scientists making decisions about funding for basic science are usually basic scientists and may not be application-oriented or trained in applied science. Basic science holds a higher place in many graduate education programs. As a result, there may be a lasting, and probably unintentional, bias toward pure science, a lack of familiarity with issues

involved in applied science, and a lack of understanding of applied issues. While applied scientists may have at least a minimal understanding of basic science in their field, basic scientists may never have been exposed to applied science. One potential result is that some reviewers of basic science proposals may neither recognize nor adequately weigh the potential application value of some research efforts. Application potential should not become the crucial criterion for funding; many important applications have come from basic research for which there were no known applications at the time (e.g., Boole's development of binary algebra). However, application potential should remain one of several criteria to be considered by all reviewers, especially when the sponsor is expecting a useful and usable product.

12. There is often a lack of communication between the basic scientist and the applied scientist or potential user. There is a need for more exchange of information between the two communities. Ongoing dialogue would help the applied scientist anticipate and plan applications based on promising basic research findings, would help the basic scientist target research for specific applications, and would help the basic science funding reviewer identify areas in which considerable needs and opportunities exist. It should be noted that, while such communication does exist, as evidenced by the work of this committee, more is needed.

The above considerations partly determine whether a given basic science research proposal is funded. From a funder's economic view, probably the worst error is funding an effort that leads to nothing. From an advancement of science view, probably the worst error is failure to fund a potentially valuable effort. Funding an invalid approach will usually be detected during the basic research efforts or later, during the applied research efforts.
The rejection of a potentially valuable effort may mean its demise, unless the researcher is adaptive, devoted, and persistent.

DECISIONS LEADING TO APPLIED RESEARCH AND EARLY DEVELOPMENT

The first requisite for making the transition from basic to applied research is that there exists a substantial base of support for a given approach in the basic scientific literature. The major purpose of the committee's report is to provide the Army with facts and expert opinions about whether such support exists for the identified techniques and whether the research conducted was internally valid. The following general discussion of the process assumes that those essential criteria have been met. While significant empirical support is a necessary condition for this

transition, it is not a sufficient condition. The findings must be recognized by the "right people," usually an applied scientist, a sponsoring research agency, or a potential user. Of course, there must be true application potential, and that potential must also be noticed.

One of the most important conditions is that there be a motivated agent. The agent must be a combination of entrepreneur, producer, director, motivator, broker, advocate, and salesperson. This individual or group usually provides the impetus for the move from basic to applied research and, if successful, into development. The agent could be an applied scientist, a research agency, a sponsor looking for projects with high potential, or an end user. In any case, the agent usually locates a potential end user, an applied research agency to conduct the work, and a sponsor with sufficient interest and funds. There is a potential bias if the applied scientist becomes the agent, but this bias is probably no greater than the bias created when the basic scientist takes on similar roles when seeking funds, except that in the latter case an end user may not be identified. In addition, as in basic science, the results of applied research must stand up to the test of replicability by disinterested parties.

In summary, although the questions being addressed by the committee are important in determining whether the identified techniques offer significant potential applications for the Army, they are not sufficient conditions for entry to applied research. This thought is further developed in the following sections.

DECISIONS LEADING TO ADVANCED DEVELOPMENT

After a promising concept has been tested for application value and some initial development toward a target application has been made, there are two possible outcomes: either the results prove sufficiently promising to warrant consideration for early development, or they do not.
Entering initial development is an important decision, because it means starting a machine that is hard to stop. Specifically, as more and more development money is spent, it becomes increasingly difficult for the decision makers to halt the effort and take responsibility for the "wasted" money. As noted above, the validity of the applied research outcome is a function of many variables, including the quality of the design, control for experimental bias, and so on. In fact, the risk of inaccurate conclusions from applied research is much higher than in basic science, because the experimenter usually does not have the experimental control that is available to the basic scientist. Some of the many problems that plague design of applied research are discussed in Chapter 3 of the report. In

addition to the difficulties of designing and carrying out high-quality applied research, there is another possible source of error in interpreting the outcome of such research. Consider the decision matrix in Figure E-3, which depicts various outcomes from applied experiments based on sound or unsound basic research concepts. Ideally, only concepts with sound foundations would be selected for applied research. Nevertheless, consider the errors represented by C: such an error could be caused by flawed methodology (e.g., Hawthorne effect, nonrandom assignment, and experimenter bias). There is also a possibility that an effect actually due to the experimental manipulation was purely coincidental and was not a true function of the unsound basic science principle on which it was presumably based. Such an outcome would give false testimony to an unsound principle. In basic science, it would be tantamount to lending support to a false hypothesis, because inappropriate operational definitions have been accidentally confounded with causally important variables. In both cases, investigators are misled; however, the applied scientist may be less concerned about such an outcome (causality), because, after all, a functional relation has been demonstrated that has real-world effects.

FIGURE E-3 Decision matrix depicting various outcomes from applied experiments based on sound or unsound concepts from basic science. [Rows give the foundation in basic science, columns the application outcome: Sound/Success = A, Sound/Failure = B, Unsound/Success = C, Unsound/Failure = D.]

Considering the outcomes of applied research that has been based on sound basic science principles, a parallel event could occur. Outcomes represented by A could in fact be the result of inappropriate applications of sound principles that accidentally happen to generate significant effects. Finally, B represents all failures that are due to methodological shortcomings, plus all outcomes based on principles with low external validity, plus all instances in which inappropriate applications were made based on sound concepts. In summary, one important additional requirement for applied research is that not only must the methodology be valid, but the application must also be valid. The biggest potential danger for the applied scientist who is seeking useful methods to help in real-world settings is represented by B, because such an outcome would reflect negatively on a sound, possibly applicable concept that simply was misapplied (assuming that basic scientists minimize the chances of C and D). Such an outcome may also incorrectly discourage other investigators from applying the concept.

Decisions about entering development early must address implementation issues, because even though there is still a long journey ahead, it is one that should not be initiated unless implementation is judged to be obtainable. Sustainment refers to keeping a given new approach in place. Like implementation, sustainment should be considered before entering development. In the following paragraphs, a sampling of implementation and sustainment issues is presented. Most of them have been included because they might partially determine the chances of survival for several of the techniques reviewed by the committee.

1. If the target user has not already been specified, it must be identified in this stage. In addition, it is important to ensure that the user understands what the product will and will not do. Users do not like surprises, and early expectations, especially for those unfamiliar with a new technology, are usually inaccurate. The user must be informed that the product has potential application value. The user's input must be continuously solicited and exploited. In this regard, it is most useful if the concept has face validity, empirical support, and a variety of other characteristics described in the following sections.

2. With notable exceptions, the Army system is not currently set up for enhancing human performance across the board. Rather, soldiers are trained to meet some standard of performance.
One of the main concerns of trainers is to raise the performance level of all soldiers to some standard. Consequently, a disproportionate amount of training time is spent on poor performers, while less time is spent on polishing an excellent performer. Because trainers will use products that help them the most, the chances of implementation and sustainment are greatest if the product provides enhancement for the poorer performers.

3. The term command emphasis refers to substantial support from relatively high places and involves problems of allocation of time and resources. To implement and sustain new techniques often requires that something else be displaced. People may resist new techniques, not because they oppose them, but because they feel they must maintain their resources at the same level to continue doing a good job. Any resulting tough decisions about allocation of resources may escalate.

Consequently, the technique with the strongest support base will have the best chance of implementation and sustainment. This usually involves more than a commander's simply liking a new technique; it usually means that a technique must have empirical support compelling enough to justify cutting back on some other potentially valuable program.

4. The term personnel turbulence refers to the fact that there is significant personnel movement within the Army. Army officers can usually expect to stay in a given position no more than three to four years before moving on to another job. While potentially increasing general knowledge among officers, such movement can also be a source of disruption to the research and development process. For example, a sponsoring agency that was excited about a new technique last year may be indifferent to the same technique this year. New personalities bring new values, new priorities, and new objectives. In addition, some officers may feel that it is the innovators who get promoted, not the people who implement the last commander's innovation. Others may feel that the chance of failure (which inevitably accompanies innovation) represents a risk to their careers. Finally, much time is spent briefing and debriefing key officials about ongoing work.

5. In addition to command emphasis, any new technique must have the support of the final user. In the Army, this probably means the cadre of noncommissioned officers. There are a number of issues involved here. First, as in any organization, there will probably be inertial resistance to new approaches (e.g., Schon, 1969). Consequently, the noncommissioned officers must be convinced that the new technique holds advantages that far exceed any possible additional work. Army leaders work hard and for long hours. They do not have time to spend familiarizing themselves with complex new techniques.
Consequently, training the trainer or user and designing straightforward, easy-to-use techniques are important. Finally, certain personality and role-model characteristics of many Army personnel may go against successful implementation and sustainment of any techniques that are construed as nonmilitary, soft, or trivial, even if scientific evidence supports them. Consequently, even the personality of the user may be a significant consideration when figuring the chances of successful implementation and sustainment.

6. It may be useful for persons unfamiliar with the Army to conceptualize two armies: a peacetime Army and a wartime Army. This is an imprecise distinction at best, because elements of both probably exist in both conditions. Nevertheless, it is important for the applied scientist to distinguish which Army has been targeted. Development, implementation, and sustainment processes for a peacetime Army may be similar to those found in a large business; however, they may be substantially different from those targeted for wartime use.

7. In planning for implementation, it is important not to overlook practical considerations. For example, consider the applied scientist who takes a new weapon simulator to the range, only to find that there is no electricity; the investigator who asks the combat infantryman to carry the small (5-pound) electronic aid along with the 48- to 72-pound gear he is already carrying; or the researcher who provides a soldier with a fragile, battery-operated electronic device for improving land navigation.

8. It is also important to consider the organizational implications of presenting a new product. Is there time in an already busy schedule? Is there physical space available for using and storing the product? Are there security implications? Is there enough flexibility in the product to accommodate personnel surges, as in national mobilization? Is the product compatible with other currently existing approaches, products, doctrine, policies, and so on? Can the Army afford to implement this product in a way that would really have an impact? How, when, and where can the product be made accessible to the real user? These are all important organizational considerations that will partially determine the success of an implemented product.

9. Finally, it is important to consider the human implications of presenting a new product. What are the documentation requirements? What are the training implications? A common mistake made by developers is to assume that documentation will always be available, that any soldier is capable of using their product, that the system will absorb any new product, and that there is always a training cadre that is expert in the area of application. These assumptions may not always be true. It is far safer to start extensive communications with the target user and determine the human requirements (the Navy and the Army HARDMAN methods and the Army MANPRINT program are good examples of this approach).
In summary, although the above issues are related to implementing and sustaining developed products, they are also important considerations at earlier stages. If significant foreseeable problems cannot be overcome, the development stages should not be entered at all. These points are raised in this context because some of the techniques under review might contain elements that would be difficult to implement for various reasons.

DECISIONS LEADING TO ENGINEERING DEVELOPMENT

If the early development process is successful, the corresponding evaluations of effectiveness show significant effects, and the implementation path looks promising, then chances are high that a project will
pass to the advanced development stage, at which an engineering design package is produced and the product to be implemented is finalized. More than ever, it is important to consider the needs and concerns of the target users. The user should be given detailed updates on the features of the product, and it is useful to provide the target user with a prototype for informal test, evaluation, and comment. Until this stage, it is quite possible that the product has been developed in a vacuum, without much attention paid to the final context. While desirable at earlier stages, it is necessary at this stage of development to consider the context: environment, personnel, schedules, existing equipment, software, space, degree of hardening required, and transportability. The developers should always remember that what might seem to them an insignificant detail might be a very important feature to the user. It is also important to sell the user on the usefulness of the product and to help him or her sell others. Full-scale cost-effectiveness evaluations conducted by impartial parties should provide input to the final decision about whether to proceed with procurement. As in all steps, political and funding considerations can have an impact on a developing product.

DECISIONS LEADING TO IMPLEMENTATION

As noted by Pressman and Wildavsky (1973), there is a general lack of published analytic work dealing with implementation issues. For an excellent account of the implementation of Army products from different perspectives, see Drucker (1976). There is a multitude of decisions to be made with regard to implementation. One important subset involves complex decisions about vendor selection. Another major subset involves complex decisions about logistics (e.g., how many are needed and how they will be distributed, maintained, and replaced).
These two major subsets of decisions are not discussed here, because they are very complex and not particularly relevant to the theme of this discussion. Because they are important, visible, and involve financial considerations, procurement and logistics have often overshadowed other implementation issues, namely those dealing with whether purchased products are actually implemented in a useful way. One such subset of questions was presented above, in the section on decisions leading to applied science and early development. Also important are steps that must be taken at the time of implementation. The implementation should be overseen. The ideal implementation team would include a member of the design team, or at least someone familiar with the development process, who knows how the product was intended to be implemented as well as the answers to inevitable questions
about why implementation is to proceed a certain way. Special demands, situations, and circumstances inevitably surface for different user groups; therefore, it is extremely advantageous to have noted these variables early in development rather than to try to adapt a finished product. In either case, a new technique has a better chance of implementation if a certain degree of flexibility can be built into it without sacrificing quality. Taken as a whole, the quality of the decisions discussed above sets the stage for implementation and determines the potential for the new product or technique. The critical issue still remains: Will the product be adopted by the real user, and will the user continue to use it? Army users are pragmatic: if something works and causes little or no additional labor, they will adopt it and continue to use it. Consider the noncommissioned officer, for example, who is often the agent for implementing a change that has been decided on at a higher level. Noncommissioned officers are continuously bombarded with changes, some of which are not explained and some of which they may perceive as misguided. Nevertheless, they tend to become strong advocates and defenders of any techniques that prove useful. Regardless of how carefully the stage is set, how well a product is implemented and sustained depends on whether the human users want it. If they perceive the product as satisfying a real need, reducing work, or increasing chances of survival, then implementation and sustainment take care of themselves. If not, no amount of planning and implementation will be sufficient.

DECISIONS LEADING TO SUSTAINMENT

There seems to be an intrinsic problem involved in implementing and sustaining psychological products. Conversely, there seems to be something about physical products that encourages their use and extends their survival, if they work.
Possible explanations for this phenomenon are that the Army demands accountability for physical equipment (e.g., signatures in a property book) or that physical things are easier to brief people about and demonstrate. Seward Smith and Art Osborne at the U.S. Army Research Institute's Fort Benning Field Unit tell stories based on decades of experience in implementing and sustaining marksmanship programs based on sound psychological principles of learning (Smith and Osborne, 1981; Osborne, 1981; Osborne and Smith, 1984). One of the stories is especially relevant to this appendix. Some time ago, decisions were made that virtually eliminated precise feedback about shot location. This was probably an unintended side effect of moving to more realistic "field fire" techniques, in which realistic targets are randomly raised and fall when hit. Hence, feedback on hits and misses is provided, but it is not meaningful feedback
for the very good (all hits) or the very poor (all misses) shooters. Substantial efforts have been made by Smith, Osborne, and their colleagues at Fort Benning to increase the amount of feedback in marksmanship training. One of many remedies was a downrange feedback exercise, in which soldiers fire rounds at fixed targets at real ranges. Next, after all the necessary safety precautions, soldiers move downrange to personally inspect shot location. The targets are large enough to capture most rounds and represent realistic silhouette paper targets (black target on white background). Soldiers examine their shot groups, put black markers on white paper (misses) and white markers on black paper (hits), and return to the firing line. The reversed markers help the training cadre and the soldiers detect trends in overall grouping over cumulative groups. On the final trip downrange, soldiers cover the holes so that the next person has a fresh target. This exercise remains one of the few opportunities for a soldier to get precise shot location at actual ranges. Notice that the soldier never receives feedback about a specific shot, only about a small group of shots (usually five). This is to be applauded as a simple yet elegant solution to an existing problem. However, there have been occasions on which the range was visited only to find all black or all white markers on the targets because the range personnel had run out of one color, or worse, to find no markers because they had run out of both colors or did not understand the significance of the exercise. The point is that, for a variety of possible reasons, the implemented technique was not properly sustained. Feedback is not an issue clouded by controversial and conflicting results from basic science laboratories. Feedback is not a politically charged issue. Feedback is recognized as very important even by the uninitiated.
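The kind of grouping analysis that the marker scheme supports can be sketched in a few lines of code. The coordinates, units, and function name below are illustrative assumptions for exposition, not part of any Army procedure:

```python
import math

def group_stats(shots):
    """Mean point of impact (MPI) and extreme spread for one shot group.

    `shots` is a list of (x, y) hole coordinates, here in centimeters
    relative to the point of aim; names and units are illustrative.
    """
    n = len(shots)
    mpi = (sum(x for x, _ in shots) / n, sum(y for _, y in shots) / n)
    # Extreme spread: the largest center-to-center distance between any
    # two holes in the group.
    spread = max(
        math.dist(a, b)
        for i, a in enumerate(shots)
        for b in shots[i + 1:]
    )
    return mpi, spread

# A hypothetical five-round group, as in the downrange exercise: every
# shot low and right of the point of aim.
group = [(2.0, -1.0), (3.0, 0.0), (1.0, -2.0), (2.5, -0.5), (1.5, -1.5)]
mpi, spread = group_stats(group)
# mpi == (2.0, -1.0); spread == sqrt(8) ~= 2.83 cm. A consistent MPI
# offset across cumulative groups suggests a zeroing problem, whereas a
# large spread suggests inconsistent technique.
```

Separating the two statistics matters because they call for different remedies, which is exactly the trend detection the reversed markers were meant to make visible to the cadre.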
The feedback technique described above does not require significant funding or time, yet there have been problems in sustaining even this relatively simple technique. This fact is potentially important when considering the various techniques described in this report. One possible solution is to ride on the back of computer hardware technology. Computer-assisted training, instruction, performance aids, and so on allow psychological principles to be incorporated into hardware and software. That is why Smith, Osborne, and their colleagues are excited about new approaches made possible by microcomputers and other technological advances. Such technology may provide excellent transfer vehicles for various techniques. For example, rifle simulators allow safe, realistic practice with precise shot location and other kinds of feedback that were not possible previously (Schroeder, 1987). Location of misses and hits (LOMAH) technology provides precise shot location
for real bullets at actual ranges. The computer allows psychological principles to become more demonstrable. It also offers a way of standardizing information by putting it in inaccessible computer code, thereby ensuring that there are no distortions in the delivery of the technique and avoiding the assumption that there will be experts in the field to deliver it. Such technological advances are surely not the solution for all applications. Although they seem to be an excellent partial solution for many implementation and sustainment issues in a peacetime Army (e.g., training), they may not be appropriate for wartime applications; there is only so much equipment a soldier can carry into battle (see Vogel, Wright, and Curtis, 1987). Overdependence on technology by the system or the soldier is a real concern.

CONCLUSIONS

The purpose of this appendix is to provide the reader unfamiliar with the Army's research and development procedures with a better context for the committee's work. Information about the scientific basis for new concepts and techniques is crucial for Army decision makers. Solid scientific support is a necessary condition; however, it is but one of several gates through which a technique must pass before it is utilized in a meaningful way. The evolution from the basic science laboratory to a useful Army product was shown to be a relatively complex process. The major steps involved in Army research and development were identified in Figure E-1 as basic research, applied research and exploratory development, advanced development, and engineering development. As scientists, we tend to concentrate on the research islands along the path and pay too little attention to the potentially dangerous waters in between. In this appendix, special attention was paid to the various decisions that must take place between the formalized research steps. The account presented is informal, based on personal experience with the system.
While not a rigid procedure, the questions and transitions identified should be considered by scientists and engineers along the entire continuum. The various techniques discussed by the committee fall at various points along this continuum. For example, sleep learning, brain asymmetry, and parapsychology appear to be having trouble clearing the first hurdle. Another cluster of techniques appears to have support, or to contain elements that have support, from basic science. These techniques are currently at various stages of applied research and early development, in which investigators are attempting to find the optimum combination
of elements and target audiences to achieve a meaningful application (i.e., influence strategies, stress management, biofeedback, accelerated learning, and mental rehearsal). Finally, because of the Army's historic interest in group cohesion, work on cohesion can be considered to be in the advanced development or implementation stages. A recent instance of this interest is the Army's COHORT program. Regardless of where a technique currently falls on the continuum, some of the questions posed are relevant to its future success. For example, user acceptance is seen as a problem for brain asymmetry, neurolinguistic programming, biofeedback, accelerated learning, and parapsychology. It is recommended that scientists along the entire continuum become more familiar with all phases of research and development as generally discussed in this appendix. More communication is needed at all stages of development among the basic scientist, the applied scientist, the engineer, and the user. The reviews of scientific support for these techniques, which are provided in this report, are critical and necessary; however, given a scientifically defensible foundation, similar reviews by applied scientists and potential users are equally important. While the current model of research and development as described above is a linear, sequential procedure, perhaps other models should be considered, for example, a parallel model. The current model implies little or no feedback to earlier stages, which could partially explain the relative lack of communication that often exists among the parties. It is impossible for the basic scientist to anticipate all the questions that could become relevant to the applied version of his or her work. Similarly, it is impossible for the applied scientist to anticipate all the ways in which the product could be used.
Finally, it is impossible for the user to recognize and identify all the uses or features of a product until he or she has become familiar with its early forms. I suggest that many excellent existing products have in fact resulted from a parallel research and development approach, which emerged when the user handed a product back to the applied scientist with suggestions, or when the applied scientist referred fundamental questions back to the basic scientist for clarification. Current knowledge about the techniques discussed and their potential applications is sufficiently limited that a parallel research and development approach may be a better strategy than the linear one.

REFERENCES

Allen Corporation of America
1985 Proceedings of Technology Transfer Workshop. (Conducted under contract to the U.S. Office of Personnel Management.) Alexandria, Va.: Allen Corporation of America.
Crawford, M.P.
1970 Military psychology and general psychology. American Psychologist 25:328-336.
Drucker, A.J.
1976 Military Research Product Utilization. Research product review 76-15. Alexandria, Va.: U.S. Army Research Institute for the Behavioral and Social Sciences.
Gruber, W.H., and D.G. Marquis
1969 Factors in the Transfer of Technology. Cambridge, Mass.: MIT Press.
Kahneman, D., and A. Tversky
1979 Intuitive prediction: Biases and corrective procedures. In S. Makridakis and S.C. Wheelwright, eds., Forecasting. Vol. 12 of TIMS/North-Holland Studies in Management Sciences. Amsterdam: Elsevier, 313-317.
Lichtenstein, S., and B. Fischhoff
1977 Do those who know more also know more about how much they know? Organizational Behavior and Human Performance 20:159-183.
Lichtenstein, S., P. Slovic, B. Fischhoff, M. Layman, and B. Coombs
1978 Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory 4:551-578.
Linstone, H.A., and M. Turoff
1979 The Delphi Method: Techniques and Applications. Reading, Mass.: Addison-Wesley.
Morton, J.A.
1969 From research to technology. In D. Allison, ed., The R&D Game. Cambridge, Mass.: MIT Press, 213-235.
Osborne, A.D.
1981 The M16 rifle: Bad reputation, good performance. Infantry 71(5):22-26.
Osborne, A.D., and S. Smith
1984 US Army FC 23-11: Unit Rifle Marksmanship Training Guide. Research product 85-12. Alexandria, Va.: U.S. Army Research Institute for the Behavioral and Social Sciences.
Perrin, E.C., and H.C. Goodman
1978 Telephone management of acute pediatric illnesses. New England Journal of Medicine 298:130-135.
Pressman, J.L., and A.B. Wildavsky
1973 Implementation: How Great Expectations in Washington Are Dashed in Oakland; or, Why It's Amazing that Federal Programs Work at All. Berkeley: University of California Press.
Rosenthal, R.
1966 Experimenter Effects in Behavioral Research. New York: Appleton-Century-Crofts.
Schon, D.A.
1969 The fear of innovation. In D. Allison, ed., The R&D Game. Cambridge, Mass.: MIT Press, 119-134.
Schroeder, J.E.
1987 Overview of the development and testing of a low-cost, part-task weapon trainer. In R.S. Stanley II, ed., Proceedings of the 1987 Conference on Technology in Training and Education. Colorado Springs: American Defense Preparedness Association, 200-209.
Seurat, S.
1979 Technology Transfer: A Realistic Approach. Houston: Gulf Publishing.
Slovic, P.
1972 From Shakespeare to Simon: Speculations-and Some Evidence-About Man's Ability to Process Information. Research Monograph 12. Oregon Research Institute.
Smith, S., and A.D. Osborne
1981 Troubleshooting rifle marksmanship. Infantry 71:28-34.
Sterling, T.D.
1959 Publication decisions and their possible effects on inferences drawn from tests of significance-or vice versa. Journal of the American Statistical Association 54:30-34.
Tversky, A., and D. Kahneman
1974 Judgment under uncertainty: Heuristics and biases. Science 185:1124-1131.
U.S. Department of the Army
1981 Organization and Functions of the US Army Research Institute for the Behavioral and Social Sciences. U.S. Army regulation 10-7.
Vogel, R.J., J.E. Wright, and G. Curtis
1987 Soldier load: When technology fails. Infantry 77(2):9-11.
Weinstein, L.F.
1986 The ivory tower and the real world: A graduate student's perspective. Human Factors Society Bulletin 29(12):8-9.
Zeleny, M.
1982 Multiple Criteria Decision Making. New York: McGraw-Hill.