Human Factors in Automated and Robotic Space Systems: Proceedings of a Symposium (1987)

Chapter: Cognitive Factors in the Design and Development of Software in the Space Station


COGNITIVE FACTORS IN THE DESIGN AND DEVELOPMENT OF SOFTWARE IN THE SPACE STATION

Peter G. Polson

Achievement of the operational and productivity goals for the Space Station will require extensive use of a wide variety of computer-based systems, ranging from application programs that run on general purpose work stations to specialized embedded computer systems that monitor, operate, and troubleshoot critical subsystems, e.g., environmental and power control systems (Anderson and Chambers, 1985; Johnson et al., 1985). However, improperly designed user interfaces for these systems will compromise these goals. The objectives of this chapter are to characterize major problems involved in the design of human-computer interfaces for systems on the Space Station and to show how systematic application of empirical and theoretical results and methodologies from cognitive psychology and cognitive science can lead to the development of interfaces that reduce training costs and enhance Space Station crew productivity. This chapter focuses on four issues: 1) transfer of user skills, 2) comprehension of complex visual displays, 3) human-computer problem solving, and 4) management of the development of usable systems.

PROBLEMS

Transfer of User Skills

Inconsistent user interfaces, in which the same basic function is performed by several methods in different contexts, reduce transfer and interfere with retention (Polson, 1987; Postman, 1971). The Space Station's numerous computer-based systems and applications programs will be developed by different organizations over a period of many years. Inconsistency will be the rule rather than the exception unless extraordinary measures are taken in the design of user interfaces for these systems. Popular and powerful applications programs developed for personal computers could be realistic models for software developed for the Space Station. The typical popular applications program for a personal computer has been developed by an independent organization; the program has a great deal of functionality, which is the reason for its commercial success.

The user interface is unique to the application, being embedded in the application's code. Effective use of the application requires specialized training and several weeks of experience. There is no consistency across different popular applications. For example, they can have very different methods for editing operations on a text string. Thus, editing an axis label on a graph, editing an operating system command, or modifying a line of text with an editor all require different sequences of user actions.

The Comprehension of Complex Visual Displays

Complex visual displays using graphics, color, and possibly motion will be used in the Space Station to present various kinds of information to crew members carrying out complex tasks. Poorly formatted, poorly organized, and difficult to comprehend displays will have negative impacts on productivity. Such displays increase training costs, the difficulty of complex tasks, and the probability of serious operator errors. There exists extensive knowledge of the processes involved in the perception of basic visual properties like color and form (Graham, 1965; Walraven, 1985), and there are numerous guidelines for display layouts and the use of symbols and color (e.g., Smith and Mosier, 1984; Kosslyn, 1985). However, there is no systematic knowledge of how people comprehend complex displays or use the information presented in such displays to perform complex tasks. There are no general principles for the development of effective complex displays.

Human-Computer Problem Solving

NASA has extremely ambitious plans for the use of artificial intelligence and robotics in the Space Station. The proposed application areas include information management, life support systems operations and monitoring, electrical power systems operations and monitoring, and guidance and navigation. Many of these tasks on the Space Station will be performed by systems with significant embedded intelligence in order to satisfy mission, technological, and economic constraints and to achieve productivity goals (Anderson and Chambers, 1985). The use of artificial intelligence techniques can significantly increase the complexity of a system from the point of view of its human user. The crew member must now understand both the task performed by the system as well as the characteristics of the "intelligent" control program (Hayes, 1987). Waterman (1986) notes that expert systems are "brittle"; when pushed beyond the very narrow domain of their real expertise they can fail with little or no warning. Uncritical use of the current state-of-the-art in expert systems technology could decrease productivity of the crew and endanger their safety. Achievement of NASA's plans for the applications of artificial intelligence in the Space Station will require extensive basic research and rapid advances in the state-of-the-art.

SOLUTIONS

Four solutions are proposed for the problems outlined in the preceding sections: 1) use of information processing models of tasks in the design process, 2) allocation of adequate resources to user-interface development, 3) use of user interface management systems, and 4) use of existing expertise in NASA.

Detailed Information-Processing Models

The first, and most important, solution is that designs for applications programs, complex visual displays, and cooperative human-computer problem solving systems be based on detailed, information-processing models of the cognitive processes involved in the performance of specific tasks. Information-processing models describe the knowledge, cognitive operations, and user actions required to perform a task. These models can be used to derive predictions of usability parameters, e.g., training time, productivity, and mental work load, and they can be used to isolate design flaws in proposed versions of a computer-based system.

Information-processing models describe what transfers, the knowledge necessary to perform the task, and thus they can be used in the design of consistent user interfaces that facilitate transfer of user skills.

Information-processing models can make important contributions to the development of effective complex visual displays. The models describe both the knowledge necessary to successfully complete a task, what is to be displayed, and the processes involved in extracting that knowledge from displays, how it is to be displayed.

Information-processing models are an important component in the successful development of effective human-computer problem solving systems. There is general agreement that successful human-computer problem solving systems will incorporate models of the task and the user (Hayes, 1987). Current theoretical methodologies in cognitive psychology and cognitive science can be used to develop both kinds of models.

Management of the Design Process

The second solution involves successful management of the development process for computer-based systems. The typical development process for complex computer-based systems in the military, NASA, and the civilian sector does not allocate enough resources to usability considerations. The primary focus of the process is on developing a system with specified functionality. Functionality is necessary but not sufficient for usability.

Usability, training time and productivity, is typically evaluated late in the design cycle, when it is far too late to make changes that improve usability. The design of highly productive complex computer-based systems requires solving simultaneously two interrelated sets of design problems involving functionality and usability. What is proposed in this chapter is that usability and functionality considerations receive equal weight during all phases of the design cycle. The preliminary version of the system is evaluated for usability. If the system fails to meet usability goals, the design is revised. The revised design is then evaluated. This iterative process continues until the design meets both usability and functionality goals (Gould and Lewis, 1985; Hayes, 1987).

User Interface Management Systems

The third solution involves the use of appropriate technologies. Many of the problems involving transfer of user skills and consistency across applications can be solved using user interface management systems. The nature of these systems is discussed in Hayes (1987) and Hayes, Szekely, and Turner (1985). They will not be discussed further here.

Existing Expertise in NASA

The fourth solution involves making effective use of the expertise already within NASA. What is being proposed here is similar to other modeling efforts currently underway in NASA dealing with problems of anthropometrics and habitability. OPSIM (Glabus and Jacoby, 1986) is a computer model that simulates crew actions and interactions in carrying out specific tasks under constraints imposed by different interior configurations, crew size and skills, and other environmental factors. These simulated task scenarios are used to rapidly explore a large number of variables involving the environment and crew composition, iteratively developing a more optimal design. Detailed models of the cognitive operations and physical actions required to carry out various types of tasks involving interaction between man and machine can be used in a similar fashion to optimize designs for user interfaces.

Alternative Solutions

Guidelines and Handbooks

Human factors guidelines (Smith and Mosier, 1986) and handbooks summarize information ranging from design goals and methodology to specific data on perceptual and motor processes. Guidelines and handbooks contain parametric information about basic perceptual and motor processes and information on limitations of classes of interaction techniques. However, they are of limited use in characterizing higher-level cognitive processes, e.g., comprehension, learning, and problem solving.

Guidelines propose reasonable design goals for cognitive aspects of a system, but they contain little or no advice on how to achieve such goals. Examples of cognitive guidelines include "minimize working memory load" and "minimize the amount of information the user has to memorize".

Usability parameters characterize the use of a system to perform a task, e.g., training time, productivity, and user satisfaction. Developing a system that optimizes usability parameters requires understanding of the task and the cognitive processes involved in performing the task. Most features incorporated into user interfaces are not good or bad per se. Usability is determined by interactions of the specific features of a design with the structure of a task. Guidelines do not contain the necessary information about task structure, the knowledge required to perform a task, or the dynamics of the cognitive processing required to perform the task. Our knowledge of cognitive processes is in the form of detailed information processing models of the performance of complex tasks.

Many writers (e.g., Gould and Lewis, 1985; Hayes, 1987) argue that successful interface design is an iterative process. This view is strongly championed in this chapter. It is not possible to derive an optimal interface from first principles. Accumulated experience, information in guidelines and handbooks, and careful theoretical analyses can lead to the development of a reasonable initial trial design. However, this design has to be evaluated, modified, and evaluated again. In other words, guidelines and handbooks are not enough.

Empirically Based Modeling Strategies

Gould and Lewis (1985) and Carroll and Campbell (in press) seriously question the theoretically driven design and evaluation processes championed in this chapter. They argue that there are serious limitations of current modeling techniques, e.g., the limitations on our knowledge of the comprehension of complex visual displays. They champion empirically-based modeling and evaluation methodologies. Many successful, complex systems, e.g., today's generation of highly automated aircraft, evolved from a combination of increasing technical capabilities, e.g., highly reliable microprocessors, and extensive operational experience (Chambers and Nagel, 1985).

However, relying on empirical methods to evaluate trial designs has serious limitations. They include difficulties in extrapolating results, doing experiments to evaluate complex systems, and evaluating transfer of training. For example, in a very complicated system, it may not be feasible to do empirical studies to evaluate a large number of tasks or to evaluate transfer between many tasks. If the current version of a trial design has unacceptable usability parameters, a designer has the very difficult task of deciding what attributes of the current design should be changed in order to improve performance. A theoretical model provides an explicit decomposition of the complex underlying processes. This additional detail describing the underlying processes can be very valuable in making well motivated changes leading to the next iteration of the design process.

OUTLINE OF THE REMAINDER OF THE CHAPTER

The remainder of this chapter is organized into five sections. The first provides a general characterization of the kinds of theoretical models of cognitive processes that we argue should be the basis for the design of highly usable computing systems. The next section describes a detailed analysis of the processes involved in the transfer of user skills and presents a series of empirical results supporting these theoretical analyses. This section also provides a description of current theoretical models of human-computer interaction. Transfer is a well understood problem. The objective of this second section is to provide an illustration of a successful solution. The next section describes some of the difficult problems involved in the design of effective complex visual displays. The fourth section discusses the problems involved in the development of effective cooperative man-machine systems. The final section makes recommendations for further research.

MODELS OF COGNITIVE PROCESSES

The information processing framework (Newell and Simon, 1972; Gardner, 1985) provides the basis for the development of detailed process models of tasks performed on the Space Station. These theoretical analyses can be used as the basis for the design of human-computer interfaces that have minimal training costs and for the task and user models incorporated into human-computer problem solving systems.

The Information Processing Framework

An information processing model incorporates representations of the task, the knowledge required to perform the task, and the processes that operate on the representation to perform the task (Gardner, 1985). Such models are often formalized as computer simulation programs. The framework characterizes the general architecture of the human information processing system, which in turn constrains the nature of the representations and the processes that operate on them, e.g., limited immediate memory. Newell and Simon (1972) and Anderson (1976, 1983) have proposed that the human information processing system can be described as a production system. The following section describes production system models of human-computer interaction.

Models of Human-Computer Interaction

The GOMS model (Card et al., 1983) and Cognitive Complexity Theory (CCT) (Kieras and Polson, 1985) both characterize the knowledge necessary to make effective, routine use of software tools like an operating system, a text editor, or a data-base manager. The GOMS formalism describes the content and structure of the knowledge underlying these skills. CCT represents this knowledge as production rules, which permits one to quantify its amount. CCT incorporates all of the assumptions of the GOMS model. The production rule formalism enables one to derive quantitative predictions of training time, transfer of user skills, and performance. The next two sections describe each framework.

The GOMS Model

The GOMS model represents a user's knowledge of how to carry out routine skills in terms of goals, operations, methods, and selection rules. Goals represent a user's intention to perform a task, a subtask, or a single cognitive or physical operation. Goals are organized into structures of interrelated goals that sequence cognitive operations and user actions. Operations characterize elementary physical actions (e.g., pressing a function key or typing a string of characters) and cognitive operations not analyzed by the theory (e.g., perceptual operations, retrieving an item from memory, or reading a parameter and storing it in working memory). A user's knowledge is organized into methods, which are subroutines. Methods generate sequences of operations that accomplish specific goals or subgoals. The goal structure of a method characterizes its internal organization and control structure. Selection rules specify the conditions under which it is appropriate to execute a method to effectively accomplish a specific goal in a given context. They are compiled pieces of problem solving knowledge. They function by asserting the goal to execute a given method in the appropriate context.

Content and Structure of a User's Knowledge

The GOMS model assumes that execution of a task involves decomposition of the task into a series of subtasks. A skilled user has effective methods for each type of subtask. Accomplishing a task involves executing the series of specialized methods that perform each subtask. There are several kinds of methods. High-level methods decompose the initial task into a sequence of subtasks. Intermediate-level methods describe the sequence of functions necessary to complete a subtask. Low-level methods generate the actual sequence of user actions necessary to perform a function.
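To make this decomposition concrete, the following sketch encodes a fragment of a hypothetical GOMS-style analysis of a simple text-editing task in Python. The task, the method names, and the operator names are invented for illustration; they are not taken from any particular system discussed in this chapter.

# A minimal sketch of a GOMS-style task decomposition for an invented
# text-editing task. Goals expand into methods; methods expand into
# lower-level subgoals or elementary operators.

GOMS_MODEL = {
    # High-level method: decompose the editing task into subtasks.
    "edit-manuscript": ["get-next-change", "execute-change", "verify-change"],
    # Intermediate-level method: the functions needed for one change.
    "execute-change":  ["locate-text", "delete-old-text", "type-new-text"],
    # Low-level methods: actual sequences of user actions (operators).
    "locate-text":     ["MOVE-CURSOR", "SELECT"],
    "delete-old-text": ["PRESS-DELETE-KEY"],
    "type-new-text":   ["TYPE-STRING"],
}

def expand(goal, depth=0):
    """Recursively expand a goal into the operator sequence it generates."""
    steps = GOMS_MODEL.get(goal)
    if steps is None:                  # an elementary operator: no further expansion
        print("  " * depth + "operator: " + goal)
        return
    print("  " * depth + "goal: " + goal)
    for step in steps:
        expand(step, depth + 1)

expand("execute-change")

Running the sketch prints the goal structure for a single change, with the low-level methods bottoming out in elementary user actions.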

A user's knowledge is a mixture of task-specific information, the high-level methods, and system-specific knowledge, the low-level methods. The knowledge captured in the GOMS representation describes both general knowledge of how the task is to be decomposed as well as specific information on how to execute the functions required to complete the task on a given system.

Cognitive Complexity Theory

Kieras and Polson (1985) propose that the knowledge represented in a GOMS model be formalized as a production system. Selection of production systems as a vehicle for formalizing this knowledge was theoretically motivated. Newell and Simon (1972) argue that the architecture of the human information processing system can be characterized as a production system. Since then, production system models have been developed for various cognitive processes (problem solving: Simon, 1975; Karat, 1983; text comprehension: Kieras, 1982; cognitive skills: Anderson, 1982).

An Overview of Production System Models

A production system represents the knowledge necessary to perform a task as a collection of rules. A rule is a condition-action pair of the form

IF (condition) THEN (action)

where the condition and action are both complex. The condition represents a pattern of information in working memory that specifies when a physical action or cognitive operation represented in the action should be executed. The condition includes a description of an explicit pattern of goals and subgoals, the state of the environment (e.g., prompts and other information on a CRT display), and other needed information in working memory.

Production Rules and the GOMS Model

A production system model is derived by first performing a GOMS analysis and then writing a program implementing the methods and control structures described in the GOMS model. Although GOMS models are better structural and qualitative descriptions of the knowledge necessary to perform tasks, expressing the knowledge and processes in the production system formalism permits the derivation of well motivated, quantitative predictions for training time, transfer, and execution time for various tasks. Kieras and Bovair (1986), Polson and Kieras (1985), and Polson et al. (1986), among others, have successfully tested the assumptions underlying these predictions. These authors have shown that the amount of time required to learn a task is a linear function of the number of new rules that must be acquired in order to successfully execute the task and that execution time is the sum of the execution times for the rules that fire in order to complete the task. They have shown that transfer of training can be characterized in terms of shared rules.
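The sketch below illustrates, with invented rule names and a deliberately simplified recognize-act cycle, how knowledge of this kind can be written as condition-action rules operating on a working memory. It is not the CCT simulation used by Kieras and Polson; it only shows the IF (condition) THEN (action) form described above.

# A toy production system. The goal, screen states, and action names are
# invented for illustration; real CCT models are far more detailed.

working_memory = {("GOAL", "delete-word"), ("SCREEN", "word-selected")}
user_actions = []

def rule_press_delete(wm):
    """IF the goal is delete-word and a word is selected THEN press DELETE."""
    if ("GOAL", "delete-word") in wm and ("SCREEN", "word-selected") in wm:
        user_actions.append("PRESS-DELETE-KEY")
        wm.discard(("GOAL", "delete-word"))     # the goal has been accomplished
        wm.add(("SCREEN", "word-deleted"))
        return True
    return False

RULES = [rule_press_delete]

# Recognize-act cycle: keep firing rules until no rule's condition matches.
fired = True
while fired:
    fired = any(rule(working_memory) for rule in RULES)

print(user_actions)   # ['PRESS-DELETE-KEY']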

TRANSFER OF USER SKILLS

In the following section, research on transfer of user skills in human-computer interaction will be reviewed. This research shows that it is possible to give a very precise theoretical characterization of large transfer effects, reductions in training time on the order of three or four to one. These results strongly support the hypothesis that large transfer effects are due to explicit relationships between different tasks performed on the same system or related tasks performed on different systems. Existing models of the acquisition and transfer of cognitive skills enable us to provide precise theoretical descriptions of these transfer processes. These same models can in turn be used to design consistent user interfaces for a wide range of tasks and systems that will promote similarly large reductions in training time and savings in training costs.

A Theoretical Model of Positive Transfer

The dominant theoretical approach for explaining specific transfer effects is due to Thorndike and Woodworth (1901) and Thorndike (1914). Thorndike assumed that transfer between two tasks is mediated by common elements. Common elements acquired in a first task that successfully generalize to a second do not have to be relearned during the acquisition of the second task. If a large amount of the knowledge required to successfully perform the second task transfers, there can be a dramatic reduction in training time.

Kieras and Bovair (1986) and Polson and Kieras (1985) proposed that a common elements theory of transfer could account for positive transfer effects during the acquisition of operating procedures. The common elements are the rules. Tasks can share methods and sequences of user actions and cognitive operations. These shared components are represented by common rules. It is assumed that these shared rules are always incorporated into the representation of a new task at little or no cost in training time. Thus, for a new task in the middle of a training sequence, the number of new unique rules may be a small fraction of the total set of rules necessary to execute the task.
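A rough sketch of the quantitative logic: if learning time is roughly linear in the number of rules that are new to the learner, then counting the rules a second task shares with an already mastered first task gives an estimate of the transfer savings. The rule names and the per-rule learning time below are invented, illustrative values only.

# Common-elements transfer, sketched with invented rule sets and an
# assumed (purely illustrative) learning time per newly acquired rule.

TIME_PER_NEW_RULE = 30.0   # seconds per new rule (hypothetical value)

task_a_rules = {"select-menu", "enter-parameter", "confirm", "move-cursor"}
task_b_rules = {"select-menu", "enter-parameter", "confirm", "mark-block", "copy-block"}

def predicted_training_time(task_rules, known_rules=frozenset()):
    """Training time grows with the number of rules not already known."""
    new_rules = task_rules - set(known_rules)
    return len(new_rules) * TIME_PER_NEW_RULE

time_b_alone = predicted_training_time(task_b_rules)                                # 5 new rules
time_b_after_a = predicted_training_time(task_b_rules, known_rules=task_a_rules)    # 2 new rules

print("Task B from scratch:", time_b_alone, "s")
print("Task B after Task A:", time_b_after_a, "s")
print("Predicted savings:  ", time_b_alone - time_b_after_a, "s")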

Examples of Successful Transfer

This section briefly describes results from the human-computer interaction literature demonstrating the magnitudes of the transfer effects and showing how CCT (Kieras and Polson, 1985) can explain these results. Polson et al. (1986) found very large transfer effects, on the order of four to one reductions in training time, for learning to perform a simple utility task on a menu-based, stand-alone word processor. Their theoretical analysis showed that a significant portion of the knowledge required to perform these tasks, when quantified in terms of number of rules, was contained in low-level methods for making menu transitions, entering parameters, and the like.

Singley and Anderson (1985) found large transfer effects between different text editors, e.g., transfer from a line to a screen editor. Polson, Bovair, and Kieras (1987) found effects of similar magnitude for transfer between two different screen editors. Their theoretical analysis showed that editors share common top-level methods that decompose the task of editing a manuscript into a series of subtasks involving individual changes in the manuscript. Furthermore, even very different editors share low-level methods, e.g., cursor positioning. Text editing is a task where transfer is mediated by knowledge of the general structure of the task as well as shared methods.

The Xerox STAR is a workstation that was explicitly designed to maximize the transfer of methods both within a given application as well as across different applications (Smith et al., 1983). All commands have a common format. The user first selects an object to be manipulated using specialized selection methods for different kinds of text or graphic objects. The operation is selected by pressing one of four command keys on the keyboard. For example, hitting the delete key causes the selected object to be deleted. Ziegler et al. (1986) carried out transfer experiments with the STAR workstation. They studied transfer between text and graphics editors. They showed that common methods acquired in one context were successfully transferred to the other, leading to very large transfer effects. Further, they were able to provide a quantitative analysis of the magnitude of these transfer effects using a production system model like those of Polson et al. (1987).

An Example of the Impact of Low-Level Inconsistencies

Karat et al. (1986) examined transfer between three highly similar word processing systems that were intended by their designers to facilitate the transfer of user skills from one system to another. The first system was developed as a menu-based, stand-alone word processor. A major goal in the design of the follow-on systems was to facilitate transfer from the dedicated, stand-alone word processor to word processors hosted on a general purpose personal computer and a departmental computing system. Karat et al. evaluated the magnitude of transfer effects from the dedicated version of the system to the other two system environments. The transfer effects were disappointingly small. Karat et al. found that users' difficulties in transferring their skill were due almost entirely to subtle differences in low-level methods.

For example, many problems were caused by the fact that the dedicated version of the system had specialized, labeled function keys. On the general purpose personal computer and departmental computer system versions, the user had to learn and retain the locations of the corresponding functions on an unlabeled, generic keyboard. Inconsistencies in key assignments for activating known functions disrupted performance when users attempted to transfer their skills from one version of the system to another.

Implications for the Design of Systems in the Space Station

The research reviewed in preceding sections shows that common methods are transferred across tasks and applications, leading to large reductions in training time, on the order of 100% to 300%. However, the Karat et al. results show that these transfer effects are fragile and can be reduced by minor but arbitrary differences in low-level methods, let alone more extensive inconsistencies. For example, the method for centering text is identical on both the dedicated and personal computer versions of the systems except that the centering function is activated by Control-Shift C on the dedicated version and by Control-Shift X on the personal computer version. This small inconsistency disrupted the performance of skilled users of the dedicated version, forcing them to stop and refer to documentation to find the correct function key. The inconsistency was caused by the fact that Control-Shift C was already used by many applications programs to abort and return to the top level of the operating system.

The potential for serious inconsistencies in common methods across different systems and applications in the Space Station is much greater than in the example of the three word processing systems studied by Karat et al., which were all developed by a single manufacturer with the explicit goal of permitting transfer of skills developed on the dedicated version of the system.

THE COMPREHENSION OF COMPLEX VISUAL DISPLAYS

Rapid developments in hardware and software technology permit the generation and presentation of very complex displays combining text, color, motion, and complex visual representations. There is limited understanding of how to effectively utilize these new capabilities. There is extensive knowledge of the basic visual processes underlying color and form perception (Graham, 1965; Walraven, 1985). Detailed models of the comprehension of complex visual displays do not exist. There is some systematic work on the effective graphical presentation of quantitative information (e.g., Kosslyn, 1985; Tufte, 1983). The widely acclaimed book The Visual Display of Quantitative Information by Tufte is a collection of design guidelines.

Today, development of effective complex displays relies almost entirely on empirically-based, iterative design methods (Gould and Lewis, 1985). A good illustration of how effective these methods can be is given by an experiment reported by Burns et al. (1986). These investigators were concerned with the problem of display format optimization. They designed a set of alternative displays to be used in orbital maneuvering tasks onboard the Space Shuttle. The new displays grouped information by function and included more meaningful abbreviations and labels. Burns et al. (1986) had both non-experts and Space Shuttle crew members retrieve specified items of information from the current, operational displays and the reformatted experimental displays. Reformatted displays improved both speed and accuracy for the non-expert subjects. The changes in format had no adverse effects on Space Shuttle crew member performance, and the reformatted displays improved their accuracy. These results are surprising. Extensive training and experience should have enabled the crew members to develop specialized skills to deal with even non-optimal displays. Any changes in display format should have disrupted these skills, leading to reductions in performance for highly trained crew members. One possible conclusion is that the current displays are so far from optimal that even brief experience with the reformatted displays enabled trained crew members to perform at a level equal to their performance with the actual displays.

The Burns et al. (1986) experiment shows that application of our current knowledge of visual perception and guidelines for formatting and labeling can lead to significant improvements of performance in an empirically-based iterative design process. However, the situation in the Space Station is more complex. The display technology for the systems onboard the Space Shuttle used small, alphanumeric CRTs. Displays onboard the Space Station will make extensive use of graphics and color. In other words, the increased capabilities provided by new display technology will enable developers to generate truly incomprehensible displays. Furthermore, there are important transfer and consistency issues. Conflicting uses of symbols, color and motion cues, and inconsistent formats across applications will have the same impact on users as inconsistent methods for entering text: increased training time and probabilities of user errors.

Dealing with issues involving more complex displays, consistency, and the use of displays as interfaces to systems with significant embedded intelligence poses more complex design problems. These design problems will have to be solved using a combination of empirically-based evaluation methods and detailed models of the task, together with a theory of the comprehension of visual displays. Consider the design problems involved in developing the displays for systems with significant embedded intelligence like the Space Station's environmental control and power systems. Effective displays should be based on 1) an understanding of the knowledge required to successfully perform critical tasks, e.g., troubleshoot a malfunction, 2) a characterization of the cognitive processes involved in extracting the necessary information from the display, and 3) a description of how the information is utilized to complete the task. In other words, what is required is a complete theory of the comprehension of complex visual displays.

Ellis and his colleagues (Ellis et al., 1985; Kim, Won Soo, et al., 1985) have proposed a methodology for the development of effective specialized displays for spatial tasks involving control of objects in three dimensional space with a full six degrees of freedom, e.g., the JPL Telerobot demonstrator and Space Station Proximity Operations displays. Ellis and his colleagues propose a design methodology that creates a very tight link between the characteristics of the task, a theoretical understanding of the perceptual processes, and empirical demonstrations that the displays actually facilitate performance of the task. This design strategy can be generalized to various types of displays and tasks.

HUMAN-COMPUTER PROBLEM SOLVING

NASA has articulated a very ambitious design philosophy for expert systems to be used on the Space Station, calling for the development of cooperative human-computer problem solving systems. Many issues concerning the design of such systems can be understood from experience with highly automated commercial aircraft (Chambers and Nagel, 1985), automatic test equipment (Richardson et al., 1985), and automated control rooms for nuclear power plants. Some of the issues are: 1) vigilance of the human operator, 2) safe transition from automatic to manual modes of operation, 3) maintenance of skills necessary to perform tasks manually, 4) successful completion of a task after the automatic control system has failed, 5) allocation of functions between man and machine, and 6) the development of truly symbiotic human-computer problem solving systems. Although the basic issues have been identified, there are no well worked out general solutions, nor are there any operational examples of symbiotic human-computer problem solving systems.

Autonomous vs. Cooperative Systems

Hayes (1987) distinguishes between conversational/agent and machine/tool-like systems. In a conversational/agent system, the user interacts with an intelligent agent to accomplish a task. Robots that carry out complex EVA tasks under human supervision and systems with sophisticated natural language interfaces are examples. Machine/tool-like systems are directly controlled by their users, although they can be highly automated, carrying out a whole sequence of low level steps without direct intervention. Examples include auto-pilots, automatic test equipment (ATE), and application programs like text editors and spreadsheets.

There is also a second important dimension, autonomy. Some systems, once initialized by their users, carry out their task completely autonomously or only make use of the human user as a low level sensor and manipulator. Examples include auto-pilots, ATE systems, and most expert systems. Auto-pilots and ATE systems are not normally considered intelligent. However, they carry out extremely complex tasks autonomously. They may not be classified as intelligent systems in that they carry out their tasks using well understood algorithms.

Many expert systems employ the human user as a low-level sensor and manipulator; the task itself is carried out autonomously. The user can ask for explanations of the final results or why the system requested a given piece of data in the process of completing the task (e.g., Shortliffe, 1976).

Limitations of Current Expert Systems

Intelligent systems can actually complicate the task of the human user, e.g., telerobots and applications with natural language interfaces. Bejczy (1986) shows that intelligent agents can impose additional difficulties for users because they have to understand both the control program and the task. For example, no natural language interface is capable of responding correctly to unrestricted input. Such interfaces understand a limited subset of natural language and may have no or limited capabilities for reasoning about the task. Thus, even if the user's request is parsed correctly, the resulting commands may be an incomplete and/or incorrect sequence of the operations necessary to complete the task.

Consider the problem of effective handoff from automatic to manual operation in a troubleshooting task, e.g., finding a serious fault in the power distribution system. Current expert systems do not make the transition from automatic to manual operation gracefully. Waterman (1986) observes that expert systems have narrow domains of expertise and no capability to reason about their limitations. Because they cannot reason about their limits, such systems are of little use in assisting a human problem solver once they have failed to find the cause of a serious fault. Thus, the system can fail catastrophically, leaving its user with the task of manually diagnosing a serious fault. Building a system capable of reasoning about its limits and providing the user with useful explanations regarding failure is beyond the current state-of-the-art. However, it is exactly this kind of capability that is required in a truly cooperative system.

In summary, current expert systems are not cooperative problem solving systems. In the process of performing their task, humans serve in a very low level, subservient role, and when systems fail, they fail catastrophically, providing their users with little or no information about the reason for the failure and no assistance in continued efforts to solve the problem. Being able to reason about its own limitations is difficult because of constraints embedded in the fundamental properties of current knowledge representation schemes (Jackson, 1986). The rules in current expert systems contain a complex mixture of control knowledge and domain-specific and general problem solving knowledge. Such systems have no explicit model of domain principles or any specific knowledge of their own strategies. Exactly this kind of knowledge is required to produce coherent explanations (Clancy, 1983). This type of knowledge is also required to reason about limitations.

Cooperative Human-Computer Problem Solvers

NASA's goals are far more ambitious than the development of autonomous intelligent problem solvers with explanation capabilities. It is repeatedly proposed in various NASA documents to develop cooperative or symbiotic human-computer problem solvers (Johnson et al., 1985; Anderson and Chambers, 1985). Discussions about the possibility of developing such systems have a surprising uniformity. The authors argue that powerful problem solvers can be developed if systems exploit the complementary strengths of human and machine, permitting one to compensate for the weaknesses of the other.

The next issue is function allocation. The discussion of function allocation begins with a general assessment of the strengths and weaknesses of humans and computers as problem solvers. This assessment is in the form of a characterization of the human and machine components listing the strengths and weaknesses of each. Typical listings are in Johnson et al., 1985, pp. 27-28; Richardson et al., 1985, pp. 47-49; and Anderson and Chambers, 1985. What is striking about these lists is their consistency. The following is taken from Richardson et al. (1985, pp. 47-49).

The strengths of the human component of the system are:

1. Processing of sensory data.
2. Pattern recognition.
3. Skilled physical manipulation, but limited physical strength.
4. Limited metacognitive skills, e.g., the ability to reason about limits of knowledge and skill.
5. Slow but powerful general learning mechanisms.
6. A large, content-addressable permanent memory.

The weaknesses of the human problem solver are:

1. Limited working memory.
2. Limited capacity to integrate a large number of separate facts.
3. Tendency to perseverate on favorite strategies and malfunctions; set effects and functional fixity.
4. Limited induction capabilities.
5. Lack of consistency; limitations on the ability to effectively use new information.
6. Emotional and motivational problems.
7. Limitations on the availability of individuals with the necessary abilities and skills.
8. Limited endurance.

The current generation of expert systems and highly autonomous automatic systems, e.g., ATEs, make use of human sensory processing, pattern recognition, and manipulative skills. Most authors recognize this and point out that their objective in developing cooperative problem solving systems is to exploit humans' cognitive capabilities as well as these lower level skills. Continuing to quote Richardson et al., the strengths of the computer component of the system are:

1. Large processing capacity.
2. Large working memory.
3. Capability of making consistent mechanical inferences taking into account all relevant facts.
4. Processing and utilizing large amounts of actuarial information.
5. Capability to store and retrieve training and reference material.
6. Availability of the system is limited only by the reliability of basic computer technology.
7. No motivational or other related problems.

The weaknesses of the machine component of the system are:

1. No or very limited capacity to adapt to novel situations.
2. No or very limited learning abilities.
3. No or very limited metacognitive abilities, e.g., understanding of its own limitations.
4. Very difficult to program, particularly the current generation of expert systems.

Examples of Cooperative Systems

The best examples of cooperative systems are intelligent training systems (ITS) (Sleeman and Brown, 1983; Polson and Richardson, 1987). The main components of an ITS are: 1) the expert module or task model, 2) the student module or user model, and 3) the tutor module or explanation subsystem. A cooperative, intelligent problem solving aid has to have real expertise about the task, an accurate model of the other intelligent agent that it is interacting with (the human user), and the capability of conducting sophisticated dialogues with the user.

Richardson et al. (1985) argue that the machine component should attempt to compensate for known limitations and failure modes that are characteristic of all forms of human problem solving: working memory failures, set and functional fixity, inference failures, and attentional limitations. One important role for a cooperative intelligent system would be to reduce information overload by selectively displaying information relevant to the highest priority subcomponent of a task. Chambers and Nagel (1985) describe the cockpit of a Boeing 747, with its several hundred instruments, indicators, and warning lights, as an example of where skilled pilots can be simply overwhelmed by the amount of available information. Plans for highly automated aircraft of the 1990s incorporate selective displays on color CRTs of the small subset of the total information about the state of the aircraft that is relevant to the current task. The ability to display relevant information would prevent information overload and augment human working memory by providing an external representation of relevant information about the system's state.
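As a concrete, if greatly simplified, illustration of the selective display idea, the sketch below filters a set of telemetry items down to those tagged as relevant to the current highest-priority task. The subsystem names, tags, and values are invented; they are not drawn from any actual Space Station or aircraft design.

# A simplified illustration of selective display: show only the telemetry
# items relevant to the current task. All names and values are invented.

telemetry = [
    {"item": "bus A voltage",      "tasks": {"power-fault-isolation"},   "value": "118 V"},
    {"item": "bus B voltage",      "tasks": {"power-fault-isolation"},   "value": "0 V"},
    {"item": "cabin CO2 level",    "tasks": {"life-support-monitoring"}, "value": "0.4%"},
    {"item": "radiator loop temp", "tasks": {"thermal-control"},         "value": "21 C"},
]

def relevant_display(current_task, items):
    """Return only the items tagged as relevant to the current task."""
    return [i for i in items if current_task in i["tasks"]]

for entry in relevant_display("power-fault-isolation", telemetry):
    print(entry["item"] + ": " + entry["value"])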

Other proposals for the role of the computer in a cooperative system focus on its computational capabilities. Memory limitations prevent human users from adequately integrating information about the current state of the system and archival information concerning the likelihoods of component failures. Thus, the machine takes on the role of general memory aid, filter, and inference engine, compensating for known weaknesses in the human information processing system.

Possible Scenarios - Serious Problems

These proposals are consistent with the large body of data about the strengths and weaknesses of human diagnostic reasoning and problem solving. However, implementing these proposals in a functioning system can cause serious difficulties. Consider a situation involving the power distribution system of the Space Station where several interacting failures have occurred. The system makes a series of incorrect inferences about the cause of the faults and displays to the human partner information irrelevant to successful solution of the problem. Such misinformation could effectively block successful solution by the human user; it is essentially a set manipulation. The misinformation would be especially damaging if the system were normally successful.

Other problems could result if the system makes incorrect inferences from its model of the human user. Assume the system has concluded, correctly, that it is incapable of independently diagnosing the faults in the power distribution system. Using its advanced explanation capabilities, it explains to its human partner its understanding of the current state of the power distribution system and its various partial diagnoses. In the process, the system presents a series of complex displays showing the current state of the power distribution system. The expert human user recognizes a complex pattern of interrelated events and informs the computer of the correct solution to the problem. The system responds by attempting to evaluate the human partner's input using information contained in its user model. This model has a very detailed description of the limits of the human information processing system, and the system incorrectly concludes that the human partner is incapable of making the correct diagnosis on the basis of such complex input, and the solution is rejected.

Conclusions

Many readers may think that the scenario presented in the preceding section is overdrawn. Of course, NASA would never tolerate the fielding of a system that was capable of effectively overruling a Space Station crew member. However, a system in which human users can override the machine partner compromises the goal of developing truly cooperative human-computer problem solving systems. Information overload, working memory failures, and failures to integrate historical data in making diagnoses are highly probable failure modes of human users. The incorrect inference made by the machine described in the preceding scenario is not unreasonable and would probably be correct in most situations.

Experience with intelligent tutoring systems (Polson and Richardson, in press) shows that such cooperative systems are exceedingly difficult to construct.

RECOMMENDATIONS FOR FURTHER RESEARCH AND CONCLUSIONS

This section contains recommendations for further research and concludes that the difficulties in developing truly productive computer-based systems are primarily management problems.

Information Processing Models

Recommendation 1. Support the development of the software tools required to rapidly develop information processing models of tasks performed on the Space Station.

This chapter has recommended that information processing models of cognitive processes be the basis for the design of applications programs, complex visual displays, and cooperative human-computer problem solving systems. This theoretical technology should be applied on a large scale to solve interface design problems on the Space Station. Unfortunately, the development of information processing models is currently an art and not a robust design technology. Furthermore, these models can be extremely complex, simulating basic psychological processes in detail (Anderson, 1983). What are required are engineering models (Newell and Card, 1986; Kieras and Polson, 1985).

Development of an effective modeling facility is an engineering problem, albeit a difficult one. There are no advances required in the theoretical state of the art in cognitive psychology. Models of various cognitive processes have to be integrated into a single simulation facility, e.g., models of perceptual, cognitive, and motor processes. Higher level languages should be developed that automate the generation of the simulation code and the detailed derivation of models. A simulation development system will be required for designers to rapidly develop models of adequate precision for use in a timely fashion in the design process.
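To suggest what such a higher level modeling language might look like, the sketch below expands a compact, declarative method specification into explicit condition-action rules of the kind a simulation facility could execute. The notation and the expansion scheme are invented for illustration; they do not describe any existing modeling tool.

# An invented, deliberately tiny "higher-level" method specification,
# expanded automatically into condition-action rules that chain on a
# step counter. Purely illustrative.

METHOD_SPEC = {
    "goal": "center-heading",
    "steps": ["select-heading", "press-center-key", "verify-centered"],
}

def compile_method(spec):
    """Expand an ordered step list into IF/THEN rules."""
    rules = []
    for i, step in enumerate(spec["steps"]):
        rules.append({
            "name": spec["goal"] + "-rule-" + str(i),
            "if":   {"goal": spec["goal"], "step": i},
            "then": {"do": step, "advance-step-to": i + 1},
        })
    return rules

for rule in compile_method(METHOD_SPEC):
    print(rule)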

The Comprehension of Complex Displays

Recommendation 2. Support an aggressive research program on the processes involved in the comprehension of complex, symbolic displays.

Many tasks on the Space Station will require that crew members interact with complicated displays. Examples include monitoring and troubleshooting of complex subsystems, manipulation and presentation of scientific data, and interacting with expert systems to carry out troubleshooting and maintenance tasks. Rapid advances in computer and display technology will enable designers to develop complex displays making use of symbolic, color, and motion cues. Effective displays that facilitate performance on these complex tasks can have large positive effects on crew productivity. The complexity of the tasks and the freedom given to the designer by the display technology require that successful designs be based on explicit models of how the information in such displays is used to perform these tasks.

Development of models of the comprehension of complex displays requires important contributions to cognitive theory. Current research in cognition and perception provides a solid foundation on which to build such models. It is possible that models of comprehension of complex displays can be based on the extensive body of theoretical results obtained on the processes involved in text comprehension (e.g., van Dijk and Kintsch, 1983). Excellent work on related problems is already going on within NASA; research programs in this area could be modeled on the work of Ellis and his colleagues briefly described in a preceding section.

Human-Computer Problem Solving

Recommendation 3. Design and support an aggressive research program leading to the eventual development of cooperative, human-computer problem solving systems.

Although the many analyses characterizing cooperative human-computer problem solving are correct, development of a useful cooperative system requires solutions to unsolved problems in expert system design, artificial intelligence, and cognitive science. A well structured research program would generate many intermediate results, components of the eventual cooperative system, that are useful in themselves on the Space Station. These include robust, high performance expert systems, advanced explanation subsystems, and various problem solving tools to assist the crew in management of the Space Station systems.

Consider the utilities of an inspectable expert system and of an inference engine tool. By an inspectable expert system, we mean a system that displays intermediate states of its diagnostic processes during troubleshooting. The expert system tool presents to the trained user intermediate results of the troubleshooting process using complex, symbolic displays. Properly designed, such information gives the human expert the information necessary to confirm a diagnosis or take over effectively if the expert system fails. Most current automatic test equipment simply reports success or failure, e.g., a red light or a green light. An inspectable expert system would be a dramatic improvement over diagnostic systems with such limited feedback.

Another useful subsystem would be an inference engine, a tool that combines information about system state with actuarial data on the likelihoods of different failure modes. This system would be designed to enable a skilled human user to do "what if" calculations and serve as a memory aid, reminding the crew member of infrequently occurring faults that are likely to be overlooked.
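One way to picture such a tool: combine prior failure rates (the actuarial data) with how well each candidate fault accounts for the observed symptoms, and rank the candidates for the crew member. The components, rates, and symptom tables below are invented, and the single multiplication of prior by likelihood is only the simplest Bayesian-style combination rule; an operational tool would be far more elaborate.

# A minimal "what if" sketch: rank candidate faults by prior failure rate
# combined with how well each fault explains the observed symptoms.
# All components, rates, and probabilities are invented for illustration.

PRIOR_FAILURE_RATE = {                 # actuarial data (hypothetical)
    "bus A breaker": 0.010,
    "bus B breaker": 0.002,
    "battery charge controller": 0.0005,
}

SYMPTOM_GIVEN_FAULT = {                # P(symptom | fault), hypothetical
    "bus A breaker": {"bus A undervoltage": 0.95, "bus B undervoltage": 0.05},
    "bus B breaker": {"bus A undervoltage": 0.05, "bus B undervoltage": 0.95},
    "battery charge controller": {"bus A undervoltage": 0.60, "bus B undervoltage": 0.60},
}

def rank_faults(observed_symptoms):
    """Score each fault by prior * product of symptom likelihoods, then normalize."""
    scores = {}
    for fault, prior in PRIOR_FAILURE_RATE.items():
        likelihood = 1.0
        for symptom in observed_symptoms:
            likelihood *= SYMPTOM_GIVEN_FAULT[fault].get(symptom, 0.01)
        scores[fault] = prior * likelihood
    total = sum(scores.values())
    return sorted(((f, s / total) for f, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

for fault, posterior in rank_faults({"bus A undervoltage"}):
    print(fault + ": " + format(posterior, ".2f"))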

Human-Computer Problem Solving

Recommendation 3. Design and support an aggressive research program leading to the eventual development of cooperative, human-computer problem solving systems.

Although the many analyses characterizing cooperative human-computer problem solving are correct, development of a useful cooperative system requires solutions to unsolved problems in expert system design, artificial intelligence, and cognitive science. A well structured research program would generate many intermediate results, components of the eventual cooperative system, that are useful in themselves on the Space Station. These include robust, high performance expert systems, advanced explanation subsystems, and various problem solving tools to assist the crew in management of the Space Station systems.

Consider the utilities of an inspectable expert system and of an inference engine tool. By an inspectable expert system, we mean a system that displays intermediate states of its diagnostic processes during trouble shooting. The expert system tool presents to the trained user intermediate results of the trouble shooting process, making use of complex, symbolic displays. Properly designed, such information gives the human expert the information necessary to confirm a diagnosis or take over effectively if the expert system fails. Most current automatic test equipment simply reports success or failure, e.g., a red light or a green light. An inspectable expert system would be a dramatic improvement over diagnostic systems with such limited feedback.

Another useful subsystem would be an inference engine, a tool that combines information about system state with actuarial data on the likelihoods of different failure modes. This system would be designed to enable a skilled human user to do what-if calculations and serve as a memory aid, reminding the crew member of infrequently occurring faults that are likely to be overlooked (a simple sketch of such a calculation appears at the end of this chapter). Inspectable expert systems are within the state of the art and would serve as a very useful test bed for research on comprehension of complex symbolic displays and on the design of such displays. An interactive inference engine could be seen as a primitive prototype of a cooperative problem solving system. Both tools can be very useful in an operational environment, and both are important intermediate steps in the eventual development of high performance cooperative systems.

There are important areas of research in cognitive science that will have to be better developed before it will be possible to build successful cooperative human-computer problem solving systems. These include models of human diagnostic reasoning, cooperative problem solving, and models of the processes involved in generating and comprehending useful explanations. A cooperative system must incorporate an extremely sophisticated model of its human partner, which in turn requires a detailed understanding of how humans carry out the specific task performed by the system as well as the general characteristics of the human information processing system and its failure modes. User models are related to the problem of developing student models in intelligent training systems. Although progress is being made in the area of student modeling, there are still numerous important unsolved problems (Polson and Richardson, 1987).

In summary, the design and development of cooperative, human-computer problem solving is the most difficult of the technological goals related to cognitive science associated with the Space Station. This goal will only be achieved by a long term, well managed research program.

In Reality, It's a Management Problem

It is widely recognized that the ambitious productivity goals for the Space Station can only be achieved with extensive use of automated systems that have effective user interfaces. However, there is a broad gap between good intentions and actual development practice. It is widely recognized today that complex systems developed for civilian, NASA, and military use are far from the current state of the art in human factors, presenting serious problems for their users. Often, design errors are so obvious that applications of simple common sense could lead to the development of more usable interfaces. In the final analysis, development of usable systems is a management problem. Consistent application of the current state of the art in human factors and knowledge of cognitive processes during all phases of the development process would have dramatic and positive effects on the productivity of the Space Station crew.
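
The inference engine tool described above can be illustrated with a minimal sketch: prior (actuarial) failure-mode likelihoods are combined with observed symptoms by Bayes' rule under a naive independence assumption, and "what if" questions are answered by rescoring hypothetical evidence. The failure modes, symptoms, and probabilities below are invented for illustration and do not describe any actual Space Station subsystem.

```python
# Hypothetical actuarial priors over failure modes of an imagined coolant subsystem.
PRIORS = {"pump-seal-leak": 0.02, "sensor-drift": 0.05, "valve-stuck": 0.01}

# P(symptom | failure mode); purely illustrative numbers.
LIKELIHOODS = {
    "pump-seal-leak": {"pressure-low": 0.90, "flow-zero": 0.30},
    "sensor-drift":   {"pressure-low": 0.60, "flow-zero": 0.05},
    "valve-stuck":    {"pressure-low": 0.40, "flow-zero": 0.95},
}

def posterior(observed_symptoms):
    """Combine priors with symptom likelihoods (assumed conditionally independent)
    and return normalized posterior probabilities over the failure modes."""
    scores = {}
    for mode, prior in PRIORS.items():
        p = prior
        for symptom in observed_symptoms:
            p *= LIKELIHOODS[mode].get(symptom, 0.01)  # small default for unmodeled symptoms
        scores[mode] = p
    total = sum(scores.values()) or 1.0
    return {mode: p / total for mode, p in scores.items()}

if __name__ == "__main__":
    print(posterior(["pressure-low"]))                # ranking given current evidence
    print(posterior(["pressure-low", "flow-zero"]))   # "what if" flow also read zero?
```

Displaying the intermediate quantities, such as the priors, the symptom likelihoods, and the resulting ranking, is the kind of information an inspectable system would expose so that a crew member can confirm or override the machine's diagnosis rather than accept a bare pass/fail indication.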

REFERENCES

Anderson, J. R. 1976 Language, Memory, and Thought. Hillsdale, NJ: Lawrence Erlbaum Associates.

Anderson, J. R. 1982 Acquisition of cognitive skill. Psychological Review 89:369-406.

Anderson, J. R. 1983 The Architecture of Cognition. Cambridge, MA: Harvard University Press.

Axon, J. R., and Is, A. B. 1985 Human Centered Space Station Design.

Bejczy, A. K. 1986 Human factors in space teleoperation. In Proceedings of the 2nd International Symposium on Next Generation Transportation Vehicles, Amalfi, Italy, June 20-24.

Burns, M. J., Warren, D. L., and Rudisill, M. 1986 Formatting space-related displays to optimize expert and nonexpert user performance. Pp. 274-280 in M. Mantei and P. Orbeton, eds., Proceedings CHI'86 Human Factors in Computer Systems. New York: Association for Computing Machinery.

Card, S. K., Moran, T. P., and Newell, A. 1983 The Psychology of Human-Computer Interaction. Hillsdale, NJ: Erlbaum.

Carroll, J. M. 1987 Interfacing Thought. Cambridge, MA: Bradford Books/MIT Press. In press.

Carroll, J. M., and Campbell, R. L. 1987 Softening up hard science: a reply to Newell and Card. Human-Computer Interaction. In press.

Chambers, A. B., and Nagel, D. C. 1985 Pilots of the future: Human or computer? Computer (November):74-87.

Clancey, W. J. 1983 The epistemology of a rule-based expert system: A framework for explanation. Artificial Intelligence 20:215-251.

Ellis, S. R., Kim, W. S., Tyler, M., and McGreevy, M. W. 1985 Visual enhancements for perspective displays: perspective parameters. Pp. 815-818 in Proceedings of the International Conference on Systems, Man, and Cybernetics. IEEE Catalog No. 85CH2253-3, November.

Engel, S. E., and Granda, R. E. 1975 Guidelines for Man/Display Interfaces. Technical Report TR 00.2720. Poughkeepsie, NY: IBM.

Gardner, H. 1985 The Mind's New Science: A History of the Cognitive Revolution. New York: Basic Books.

Globus, A., and Jacoby, R. 1986 Space Station Operational Simulation (OPSIM). Moffett Field, CA: NASA-Ames Research Center.

Gould, J. D., and Lewis, C. 1985 Designing for usability: key principles and what designers think. Communications of the ACM 28:300-311.

Graham, C. H. 1965 Vision and Visual Perception. New York: John Wiley and Sons.

Hayes, P. 1987 Changes in human-computer interfaces on the space station: Why it needs to happen and how to plan for it. In Human Factors in Automated and Robotic Space Systems. Washington, DC: National Academy of Sciences.

Hayes, P. J., Szekely, P. A., and Lerner, R. A. 1985 Design alternatives for user interface management systems based on experience with COUSIN. Pp. 169-175 in L. Borman and B. Curtis, eds., Proceedings of the CHI 1985 Conference on Human Factors in Computing. New York: Association for Computing Machinery.

Jackson, P. 1986 Introduction to Expert Systems. Wokingham, England: Addison-Wesley Publishing Company.

Johnson, R. D., Bershader, D., and Leifer, L. 1985 Autonomy and the Human Element in Space: Final Report of the 1983 NASA/ASEE Summer Faculty Workshop. Moffett Field, CA: NASA-Ames Research Center.

Karat, J. 1983 A model of problem solving with incomplete constraint knowledge. Cognitive Psychology 14:538-559.

Karat, J., Boyes, L., Weisgerber, S., and Schuck, C. 1986 Transfer between word processing systems. Pp. 67-71 in M. Mantei and P. Orbeton, eds., Proceedings CHI'86 Human Factors in Computer Systems. New York: Association for Computing Machinery.

Kieras, D. E. 1982 A model of reader strategy for abstracting main ideas from simple technical prose. Text 2:47-82.

Kieras, D. E., and Bovair, S. 1986 The acquisition of procedures from text: A production-system analysis of transfer of training. Journal of Memory and Language 25:507-524.

Kieras, D. E., and Polson, P. G. 1985 An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies 22:365-394.

Kim, W. S., Ellis, S. R., Tyler, M., and Stark, L. 1985 Visual enhancements for telerobotics: perspective parameters. Pp. 807-812 in Proceedings of the International Conference on Systems, Man, and Cybernetics. IEEE Catalog No. 85CH2253-3, November.

Kosslyn, S. M. 1985 Graphics and human information processing: A review of five books. Journal of the American Statistical Association 80:499-512.

Newell, A., and Card, S. K. 1986 The prospects for psychological science in human-computer interaction. Human-Computer Interaction 1:209-242.

Newell, A., and Simon, H. A. 1972 Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.

Polson, M. C., and Richardson, J. 1987 Foundations of Intelligent Tutoring Systems. Hillsdale, NJ: Lawrence Erlbaum Associates. In press.

Polson, P. G. 1987 A quantitative theory of human-computer interaction. In J. M. Carroll, ed., Interfacing Thought. Cambridge, MA: Bradford Books/MIT Press. In press.

Polson, P. G., Bovair, S., and Kieras, D. E. 1987 Transfer between text editors. Pp. 27-32 in P. Tanner and J. M. Carroll, eds., Proceedings CHI'87 Human Factors in Computer Systems. New York: Association for Computing Machinery.

Polson, P. G., and Kieras, D. E. 1985 A quantitative model of the learning and performance of text editing knowledge. Pp. 207-212 in L. Borman and B. Curtis, eds., Proceedings of the CHI 1985 Conference on Human Factors in Computing. New York: Association for Computing Machinery.

Polson, P. G., Muncher, E., and Engelbeck, G. 1986 A test of a common elements theory of transfer. Pp. 78-83 in M. Mantei and P. Orbeton, eds., Proceedings CHI'86 Human Factors in Computer Systems. New York: Association for Computing Machinery.

Postman, L. 1971 Transfer, interference, and forgetting. In J. W. Kling and L. A. Riggs, eds., Woodworth and Schlosberg's Experimental Psychology. New York: Holt, Rinehart, and Winston.

Richardson, J. J., Feller, R. A., Maxion, R. A., Polson, P. G., and DeJong, K. A. 1985 Artificial Intelligence in Maintenance: Synthesis of Technical Issues. AFHRL-TR-85-7. Brooks Air Force Base, TX: Air Force Human Resources Laboratory.

Shortliffe, E. H. 1976 Computer-Based Medical Consultations: MYCIN. New York: American Elsevier.

Simon, H. A. 1975 Functional equivalence of problem solving skills. Cognitive Psychology 7:268-286.

Singley, K., and Anderson, J. 1985 Transfer of text-editing skills. International Journal of Man-Machine Studies 22:403-423.

Sleeman, D., and Brown, J. S. 1983 Intelligent Tutoring Systems. London: Academic Press.

Smith, D. C., et al. 1983 Designing the Star user interface. In P. Degano and E. Sandewall, eds., Interactive Computer Systems. Amsterdam: North-Holland.

Smith, S. L., and Mosier, J. N. 1984 Guidelines for Designing User Interface Software. ESD-TR-86-278, MTR-10090. Bedford, MA: The MITRE Corporation.

Thorndike, E. L. 1914 The Psychology of Learning. New York: Teachers College.

Thorndike, E. L., and Woodworth, R. S. 1901 The influence of improvement in one mental function upon the efficiency of other functions. Psychological Review 8:247-261.

Tufte, E. R. 1983 The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

van Dijk, T. A., and Kintsch, W. 1983 Strategies of Discourse Comprehension. New York: Academic Press.

Walraven, J. 1985 The colours are not on the display: a survey of non-veridical perceptions that may turn up on a colour display. Pp. 35-42 in Displays.

Waterman, D. A. 1986 A Guide to Expert Systems. Reading, MA: Addison-Wesley Publishing Company.

Ziegler, J. E., Vossen, P. H., and Hoppe, H. U. 1986 Assessment of Learning Behavior in Direct Manipulation Dialogues: Cognitive Complexity Analysis. ESPRIT Project 385 HUFIT, Working Paper B.3.3. Stuttgart, West Germany: Fraunhofer-Institut für Arbeitswirtschaft und Organisation.
