Cognitive Engineering: It’s Not What You Think

STEPHANIE GUERLAIN

University of Virginia

Charlottesville, Virginia


What is cognitive engineering? It is neither brain science nor cognitive science, nor artificial intelligence, neuro-engineering, robot design, nor any of a myriad of other scientific fields that may come to mind when you hear the term. Cognitive engineering does, however, include aspects of all of the fields mentioned above and many others, including psychology, anthropology, computer science, design, and systems engineering. Simply put, cognitive engineering is about understanding and designing systems that require human intellectual work.

Since almost every human activity involves human intellectual work, it follows that cognitive engineering can be applied to just about any human activity. I’ve even published a paper in a peer-reviewed scientific magazine describing the cognitive engineering aspects of riding a horse in a cross-country jumping competition (Guerlain, 2001). The article got mixed reactions from professional colleagues, some very positive and others along the lines of, “What’s next, golf?” Actually, cognitive engineering methods could be applied to understanding and improving any sporting activity, and they have even been applied to the activity of watching a sporting competition (see, for example, White et al., 2008).

Despite these somewhat “non-engineering” applications, the field of cognitive engineering has grown up around the practical need to understand and improve quality, safety, and efficiency in high-impact, complex domains, such as aviation, medicine, and nuclear power control, where poorly designed systems can lead to major accidents. In fact, the field emerged largely in response to accidents in large, complex systems that seemingly had nothing to do with the design of those systems. For example, let’s say a highly trained cockpit crew flies a perfectly good plane into the side of a mountain. This loss of “situational awareness” has in fact happened often enough that the aviation industry has coined a term for the phenomenon: “controlled flight into terrain” (CFIT). In these cases, there was no mechanical failure, and the computers and other automation, processes, and so on worked as designed. A first response as to the cause of such accidents is often “human error.”

However, by studying the causes of accidents, or even by studying the day-to-day activities of people using systems designed for them, it turns out that many systems are poorly designed to begin with. The computers, automation, and other engineered processes (such as procedures, handoffs during shift changes, logbooks, regulatory requirements, and other aspects of passing information among people and computers) have weak spots, and, if certain events co-occur at those points, they can, collectively, cause failure.

Ironically, people working day-to-day in such systems often see these failure modes (although they may not think of them in that way) and create “workarounds,” such as placing sticky notes to remind themselves what to do or not do, or they develop an almost “intuitive” understanding of how to react if and when things start to go wrong. These workers are important sources of knowledge, but they are often overlooked by engineers, who may not have been trained to gather such information. This is where cognitive engineers excel.
Cognitive engineers focus not only on interviewing and observing end users but also on the intrinsic relationships and requirements of a task. For example, for air-traffic controllers, it is a fact that multiple planes are moving at varying (but constrained) speeds and altitudes, and these facts cannot be “simplified away.” But one can often take advantage of the constraints in system knowledge when designing work flow, representations, and other aspects of decision-support systems.

Cognitive engineering is a subspecialty of the broader field known as ergonomics. When most people think of ergonomics, they think of physical changes to a product or tool to make it “fit” better physically. An early example of this was the design of the Reach toothbrush (Hill and Kreifeldt, 1979), which spawned a whole new field of “toothbrush design”—“mouth-friendly” technology designed to improve the task of cleaning teeth as compared to what could be achieved with the straight-handled, rectangular-shaped toothbrush that was then the norm.

Both physical and cognitive ergonomics are important, and the same environment or system can be analyzed and improved upon from both perspectives. Table 1 provides a few examples of how a cognitive ergonomist and a physical ergonomist might analyze the same system from two different perspectives.

TABLE 1 Comparative Examples of Physical and Cognitive Ergonomics

Human Activity                Physical Ergonomics           Cognitive Ergonomics
                              (worker safety and risks)     (process safety and risks)
Will sitting for 8 hours…     …cause back pain?             …cause loss of attention?
Will excessive noise…         …cause hearing loss?          …cause operators to miss a request?
Do the operator displays…     …cause eye strain?            …cause a misunderstanding of the situation?

In general, a physical ergonomist focuses on creating an environment that is safe and does not create physical stress or difficulties for workers in that environment. Cognitive ergonomists focus on creating an environment that maintains overall process safety, for example, by minimizing the chances for human error. Both analyses are important, because improvements in either can significantly reduce downtime by reducing worker injuries, accelerate overall performance time by eliminating extraneous steps, and increase worker satisfaction.

Many people claim that ergonomics is just “common sense,” but given the number of engineered systems that are designed without taking into account human capabilities and limitations and that do not truly fit task requirements, I often claim that, “Unfortunately, it’s not that common.” As a simple example, take the challenge of finding all apartments for rent (that allow pets) within 1 mile (e.g., 20 minutes walking distance) of a particular location. This task, which can be easily specified, is almost impossible to achieve using current search engines, not because they cannot accomplish the task, but because they are not set up to run this kind of query. Thus users must endlessly search, type, click, move the mouse, zoom, scroll, page, phone, bookmark, write notes, and so on.
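To make the point concrete, the "easily specified" apartment query really can be written down in a few lines. The sketch below is illustrative only: the listing records, field names, and coordinates are invented for the example and do not come from any real search engine or data source.

```python
import math

# Hypothetical listing records; in a real system these would come from a
# listings database or API. All values here are made up for illustration.
LISTINGS = [
    {"address": "12 Elm St", "rent": 1200, "pets_allowed": True,
     "lat": 38.0336, "lon": -78.4850},
    {"address": "98 Oak Ave", "rent": 950, "pets_allowed": False,
     "lat": 38.0299, "lon": -78.4767},
    {"address": "5 Ivy Rd", "rent": 1100, "pets_allowed": True,
     "lat": 38.0459, "lon": -78.5120},
]

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles (haversine formula)."""
    r = 3958.8  # mean Earth radius, miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pet_friendly_nearby(listings, lat, lon, max_miles=1.0):
    """One query answers the whole task: pets allowed AND within range."""
    return [l for l in listings
            if l["pets_allowed"]
            and miles_between(lat, lon, l["lat"], l["lon"]) <= max_miles]

# Search from a particular location (coordinates roughly in Charlottesville).
results = pet_friendly_nearby(LISTINGS, 38.0293, -78.4767)
```

The gap the text describes is not computational difficulty; the filter above is trivial. It is that deployed interfaces rarely expose the task-level query, so users must decompose it into dozens of manual steps.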
In fact, you can imagine a well-designed system that would accept such a query and return a map and directions, price, and everything else you might want to know in one easy result that could either be printed out in a logical order for driving (or, in the case of a city, a logical bus route or walking route) or downloaded directly to a GPS system. Thus even when the technology is available, system designers may not really understand user or task requirements, thus creating a system that necessitates all sorts of workarounds and extra investment of time and effort to accomplish a task, thereby increasing the possibility of errors or suboptimal solutions or, equally likely, of users giving up because the system makes accomplishing a task too difficult.

The problem is especially prevalent in the current health care system, where so much effort is required to gather together the records and relevant information for a patient (particularly one who has just moved to the area or just been admitted to an emergency room) that doctors most often rely on asking the patient for a health history. Even if all records have been sent to a hospital, the data cannot usually be easily sorted, digested, or summarized. In a large, complex medical record, there may be pages and pages of often-repeated information. What constitutes a “patient overview screen” can often be likened to a car dashboard that, instead of giving you the information you need directly while driving the car (e.g., speed, gas remaining, RPMs, “change oil” warnings, etc.), gives you a set of buttons that, while driving, you could click on individually to see each of these items should you so choose. This is not an overview; this is a front-end index that requires navigating to a successive set of detail pages each and every time any information is reviewed.

Electronic medical records are just emerging, but they are, unfortunately, not patient-centered. Patients may receive health care in many different places, even if they live in only one state. However, electronic medical record systems are being implemented piecemeal and are usually integrated only within a single health care institution.

One mantra of cognitive engineering is to design for data extraction, not just data availability (Hollnagel et al., 1986). Efficient data extraction by people often means pre-organizing data and presenting it in a way that lets people use their pattern-recognition skills to directly “pick up” the answer they are seeking in an efficient, “parallel” way (e.g., more data displayed does not require more search time).
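The "design for data extraction" mantra can be sketched with a toy example: rather than displaying a raw number and leaving the interpretation to the operator, the system classifies the value against its operating bands and displays the state directly. The parameter, band limits, and state labels below are assumptions invented for the sketch, not taken from any real dashboard or control system.

```python
# Operating bands for a (hypothetical) fuel gauge, as fractions of a full
# tank: (low bound inclusive, high bound exclusive, displayed state).
FUEL_BANDS = [
    (0.00, 0.10, "refuel now"),   # "in the red"
    (0.10, 0.25, "fuel low"),     # caution band
    (0.25, 1.01, "fuel ok"),      # normal band (1.01 so a full tank matches)
]

def fuel_state(fraction_full):
    """Map a raw tank reading (0.0-1.0) to the state the driver cares about."""
    for low, high, state in FUEL_BANDS:
        if low <= fraction_full < high:
            return state
    raise ValueError(f"reading out of range: {fraction_full}")

# The display shows the answer ("fuel low"), not a number to be interpreted.
```

The design choice is that the classification work happens once, in the system, instead of repeatedly in the head of every operator who glances at the display.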
Thus analog instruments in a car can be read quickly, because the driver only has to look at the dial to see if it is “in the red”; he or she does not have to convert a particular number into the “state” of “the car needs more gas soon.” Similarly, in control rooms, operators often use at least one monitor to display trends in key process parameters, because operators can interpret patterns in those trends to detect important events and then use this overview information to navigate to detailed displays as appropriate. One area of current research in cognitive engineering is designing domain-specific overview displays that directly inform practitioners of the state of the system without requiring that they interpret information scattered across several screens (e.g., see Burns, 2000; Card et al., 1999; Cushing et al., 2006; Guerlain et al., 2002; Smoot et al., 2005). Cognitive engineers consider all inputs, outputs, and decisions that a task requires and then inform or lead design teams to ensure that they understand what should be “automated,” what should be displayed, in what way, and in what order.

Cognitive engineers also analyze the design of feedback to human operators (Norman, 1990). The autopilot system in an aircraft, for example, does not need to pull back on the yoke to make the plane go up. However, even in autopilot mode, the yoke does “pull back,” for the sole purpose of providing feedback to pilots about the changing state of the airplane. The pilot can see the movement of the yoke in his or her peripheral vision while performing other tasks; he or she does not have to focus attention on a particular dial or instrument panel to see if a number has changed on that display. Thus when cognitive engineers design a system, they consider many kinds of feedback.

Cognitive engineers also consider the context in which a system will be used. If people work in a very noisy (e.g., industrial) environment, then relying on a beep or other sound to get their attention would not be a very good design. Another context to consider is the state of the people who will be using the system. How can we design a system that will accommodate all levels of potential users? Can we design the system in such a way that people will be able to use it right away and become better at using it with experience, rather than relying on extensive training?

In general, design is an iterative process. Cognitive engineers focus on understanding the cognitive requirements and constraints inherent in the system, designing prototypes, testing those prototypes for usability, and iterating on the designs until production. This human-centered design process is often skipped, either because of a lack of knowledge about the cognitive engineering approach or because of a perceived lack of time or funding. Usability experts may be brought in after implementation of a system, but it may be difficult to make changes at that point. In fact, a usable system is one that meets the actual requirements. Figure 1 shows how far down in the process implementation should start and how early in the process task analysis and iterative design and testing should start.

FIGURE 1 The human-centered design process: task analysis, product concept, preliminary functional requirements, prototype design(s), user review/testing, finalization of requirements/design, implementation, performance support aids, field tests, and final product.
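The context-of-use point above (a beep is useless on a noisy plant floor) can be sketched as an explicit design rule that picks alert channels from assumed properties of the environment. The 75 dB threshold and the channel names are invented for this example and are not from the chapter.

```python
def choose_alert_channels(ambient_noise_db, operator_wearing_device):
    """Pick redundant alert modalities suited to the context of use.

    Illustrative only: in a real design the thresholds would come from
    an analysis of the actual work environment.
    """
    channels = []
    if ambient_noise_db < 75:          # a beep can still be heard (assumed cutoff)
        channels.append("auditory")
    channels.append("visual")          # flashing indicator works regardless of noise
    if operator_wearing_device:
        channels.append("tactile")     # vibration cuts through ambient noise
    return channels
```

In a quiet office this yields auditory plus visual alerts; on a 95 dB factory floor with a wearable device, it drops the beep and adds vibration.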

From a practical perspective, cognitive engineering has almost limitless applications to current and evolving work practices. From a theoretical perspective, much remains to be understood about creating flexible decision-support systems that can not only support a broad range of applications but also have the automation capabilities to put data together in a way that directly meets the task requirements.

REFERENCES

Burns, C. 2000. Putting it all together: improving display integration in ecological displays. Human Factors 42(2): 226–241.

Card, S.K., J.D. Mackinlay, and B. Shneiderman. 1999. Readings in Information Visualization: Using Vision to Think. San Francisco, Calif.: Morgan Kaufmann.

Cushing, J., L.D. Janssen, S. Allen, and S. Guerlain. 2006. Overview + Detail in a Tomahawk mission-to-platform assignment tool: applying information visualization in support of an asset allocation planning task. Information Visualization 5: 1–14.

Guerlain, S. 2001. Judging pace cross-country: how cognitive task analysis yields insight into an apparently simple, yet complex horse-rider activity. Ergonomics in Design 9(3): 13–18.

Guerlain, S., G.A. Jamieson, P. Bullemer, and R. Blair. 2002. The MPC Elucidator: a case study in the design for human-automation interaction. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans 32(1): 25–40.

Hill, P., and J. Kreifeldt. 1979. Toothbrush—US Patent D251,038.

Hollnagel, E., G. Mancini, and D.D. Woods. 1986. Intelligent Decision Support in Process Environments. New York: Springer-Verlag.

Norman, D.A. 1990. The “Problem” with Automation: Inappropriate Feedback and Interaction, Not “Overautomation.” Pp. 569–576 in Human Factors in Hazardous Situations, edited by D.E. Broadbent, A. Baddeley, and J.J. Reason. Oxford, England: Clarendon Press.

Smoot, M., E. Bass, S. Guerlain, and W. Pearson. 2005. A system for visualizing and analyzing near-optimal protein sequence alignments. Information Visualization 4(3): 224–237.

White, M., M. Odioso, M. Weaver, M. Purvis, E.J. Bass, and S. Bruce. 2008. Horses? There are horses at Foxfield? An analysis of college student hazardous drinking and related decision-making behaviors. 2008 IEEE Systems and Information Engineering Design Symposium, April 25, 2008, University of Virginia, Charlottesville.