Human-System Integration in the System Development Process: A New Look

Part II
Human-System Integration Methods in System Development
The chapters in Part II provide overviews of state-of-the-art methods of human-system integration (HSI) that can be used to inform and guide the design of person-machine systems using the incremental commitment model approach to system development. We have defined three general classes of methods that provide robust representation of multiple HSI concerns and are applicable at varying levels of effort throughout the development life cycle. These broad classes include methods to

- Define context of use. Methods for analyses that attempt to characterize early opportunities, early requirements, and the context of use, including characteristics of users, their tasks, and the broader physical and organizational environment in which they operate, so as to build systems that will effectively meet users' needs and will function smoothly in the broader physical and organizational context.

- Define requirements and design solutions. Methods to identify requirements and design alternatives to meet the requirements revealed by prior up-front analysis.

- Evaluate. Methods to evaluate the adequacy of proposed design solutions and to propel further design innovation.

Figure II-1 presents a representative sampling of methods that fall into each activity category and the shared representations that are generated by these methods. A number of points are highlighted in the figure:

- The importance of involving domain practitioners—the individuals who will be using the system to achieve their goals in the target domain—as active partners throughout the design process.

- The importance of involving multidisciplinary design experts and other stakeholders to ensure that multiple perspectives are considered throughout the system design and evaluation process and that stakeholder commitment is achieved at each step.

- The availability of a broad range of methods in each class of activity.
Appropriate methods can be selected and tailored to meet the specific needs and scope of the system development project.

- The range of shared representations that can be generated as output of each of the four HSI activities. These representations provide shared views that can be inspected and evaluated by the system stakeholders, including domain practitioners, who will be the target users of the system. The shared representations serve as evidence that can be used to inform risk-driven decision points in the incremental commitment development process.

We realize that the classification of methods for discussion in the three chapters that follow is to some extent arbitrary, as many of the methods
are applied at several points in the system design process and thus logically could be presented in more than one chapter.

FIGURE II-1 HSI activities, participants, methods, and shared representations.

The assignment of methods to classes and chapters is based on how the methods are most frequently used and where in the design process they make the greatest contribution. As already noted, the presentation of methods is not exhaustive. We have selected representative methods in each class, as well as some less well-known methods that have been used primarily in the private sector and that we think have applicability to military systems as well. Chapter 1 provides other sources of methods.

The committee further recognizes that many of the methods described (e.g., event data analysis methods, user evaluation studies) build on foundational methods derived from the behavioral sciences (e.g., experimental design methodology, survey design methods, psychological scaling techniques, statistics, qualitative research methods). These foundational methods are
not explicitly covered in this report because they are well understood in the field, and textbooks that cover the topics are widely available (e.g., Charlton and O'Brien, 2002; Cook and Campbell, 1979; Coolican, 2004; Fowler, 2002; Yin, 2003). However, two categories of foundational methods that are not explicitly covered but deserve some discussion are briefly described below. Both of these method categories—function allocation and performance measurement—are integral to the application of other methods throughout the design process.

Function allocation is the assignment of functions to specific software or hardware modules or to human operators or users. In the case of hardware and software, it is a decision about which functions are sufficiently similar in software requirements or interfunction communication to collect together for implementation. In the case of assignment to human users versus software/hardware, it is a matter of evaluating the performance capacities and limitations of the users, the constraints imposed by the software and hardware, and the system requirements that imply human roles because of safety or policy implications. Everyone agrees that function allocation is, at base, a creative aspect of the overall design process and that it requires hypothesis generation, evaluation, and iteration. In our view, it spans the range of activities that are represented by the methodologies we are describing and does not, by itself, have particular methodologies associated with it. There have been attempts to systematize the process of achieving function allocation (Price, 1985), but in our view they encompass the several parts of the design process that we are discussing in this section and do not add new substantive information.
Readers interested in the topic itself are referred to Price (1985) and a special issue on collaboration, cooperation, and conflict in dialogue systems of the International Journal of Human-Computer Studies (2000).

Performance measurement supports just about every methodology that is applied to human-system integration. Stakeholders are interested in the quality of performance of the systems under development, and they would like to have predictions of performance before the system is built. While they may be most interested in overall system performance—output per unit time, mean time to failure, probability of successful operation or mission, etc.—there is also a need during development for intermediate measures of the performance of individual elements of the system, because diagnosis of the cause of faulty system performance requires more analytic measures at lower functional levels. From a systems engineering point of view, one may consider system-subsystem-module as the analysis breakdown; when one is concerned with human-system integration, however, the focus is on goal-task-subtask as the relevant decomposition of performance, because it is in terms of task performance that measures of specifically human performance are most meaningful and relevant.
TABLE II-1 Types of Performance Measures

Integrated system performance measures
  Measures: output per unit time; mean time to failure; probability of successful operation or mission.
  Potential uses: Is the overall design and implementation successful?

System state variables
  Measures: the values of parameters reflecting the various states of the system as a function of time.
  Potential uses: Is the system being controlled appropriately, either by automation or by human controllers? Are safety boundaries being exceeded?

Human performance
  Measures: response time; percent correct/probability of error; time to learn/relearn; measures of remembering (recognition, free recall).
  Potential uses: Is the system design producing the desired human performance? Is training effective/efficient? Is the system requiring unnecessary workload or memory load?

Industrial engineering measures
  Measures: activity analysis (measures reflecting the allocation of time to different tasks); time and motion study (measures describing in detail the literal time taken for each sequential step in a process).
  Potential uses: What are the equipment duty cycles? How are the users distributing their time? What are the most challenging tasks?

Measures derived from human physiology
  Measures: electroencephalographic records (continuous wave analysis, evoked potentials); electro-ocular responses (eye movement tracking, eye blink response, pupil size); cardiovascular measures (heart rate/heart rate variability); metabolic levels.
  Potential uses: How is attention being allocated? What information is being sought? How attention absorbing is the task? How stressful is the task? What is the workload?

Subjective measures
  Measures: judges'/expert ratings; questionnaire data; interview/protocol analysis.
  Potential uses: What are experts' opinions of user/system performance? Do the users like the design? How hard are the users working? Do the users have situation awareness? Are the users stressed?
Team measures
  Measures: time to complete team task; accuracy/quality of team performance; judges'/expert ratings of team effectiveness; team process measures of specific behaviors; cognitive measures of knowledge sharing and team situation awareness.
  Potential uses: Are levels of team performance acceptable? How do different design decisions affect team performance? What aspects of team performance are most critical?

Table II-1 contains some examples of the kinds of measures that are likely to be of interest. Since each situation is different, the analyst must consider the context of use under which measurement or prediction is to be undertaken, the goals of the measurement, the characteristics of the users who will be tested or about whom performance will be inferred, and the level of detail of analysis required in order to select specific measures to be used.
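Several of the human performance measures in Table II-1 reduce to simple computations over logged trials. The following minimal sketch illustrates two of them; the trial-log format and values are invented for illustration, not prescribed by the chapter:

```python
# Sketch: computing two human performance measures from Table II-1
# (mean response time, percent correct) from a hypothetical trial log.
# The record format (response_time_seconds, correct) is an assumption.

from statistics import mean

trials = [
    # (response_time_seconds, correct)
    (1.2, True), (0.9, True), (2.5, False), (1.1, True), (1.8, False),
]

def mean_response_time(trials):
    """Average response time across all trials."""
    return mean(rt for rt, _ in trials)

def percent_correct(trials):
    """Percentage of trials answered correctly."""
    return 100.0 * sum(1 for _, ok in trials if ok) / len(trials)

print(mean_response_time(trials))  # approximately 1.5
print(percent_correct(trials))     # 60.0
```

In practice such summaries would be computed per user, per task, and per condition so that faulty system performance can be diagnosed at the lower functional levels discussed above.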
6
Defining Opportunities and Context of Use

In the past when new technologies were introduced, the focus was on what new capabilities the technology might bring to the situation. People then had to find ways to cope with integrating their actions across often disparate systems. Over time, as computational capability has increased and become more flexible, the focus has shifted toward understanding what people need in given situations and then finding ways for technology to support their activities. In other words, people no longer need to adapt to the technology—the technology can be designed to do what people and the situation demand. The challenge is to understand human needs in dynamic contexts and respond with solutions that leverage the best of what technology has to offer and at the same time resonate with people's natural abilities. The emphasis and risks have shifted from the technology to the users.

This chapter introduces a range of methods that can be used to gain an understanding of users, their needs and goals, and the broader context in which they operate. The methods provide a rich tool box to support two of the major classes of human-system integration (HSI) activities that feed into the incremental commitment model (ICM): defining opportunities and requirements and defining context of use. They include methods that focus on the capabilities, tasks, and activities of users (e.g., task analysis methods that characterize the tasks to be performed and their sequential flow, cognitive task analysis methods that define the knowledge and mental strategies that underlie task performance), as well as methods that examine the broader physical, social, and organizational context in which individuals operate.
FIGURE 6-1 Representative set of methods and sample shared representations for defining opportunities and context of use.

address the specific problems facing users and are sensitive to the larger system context. Experience has shown that introduction of new technology does not necessarily guarantee improved human-machine system performance (Woods and Dekker, 2000; Kleiner, Drury, and Palepu, 1998) or the fulfillment of human needs (Muller et al., 1997b; Nardi, 1996; National Research Council, 1997; Rosson and Carroll, 2002; Shneiderman, 2002). Poor use of technology can result in systems that are difficult to learn or use, can create additional workload for system users, or, in the extreme,
alternative that provides output richer than outcome measures, yet it can be less obtrusive and less reliant on memory than questionnaires or surveys.

Event data analysis (EDA) is largely a descriptive approach to the analysis and summary of data that take the form of observations or events that occur over time. The EDA approach incorporates a variety of methods for collecting and reducing data. In the context of human-system integration, it is particularly useful for observations collected via instrumentation (e.g., keystrokes, communication logs) over time. Event data analysis is a bottom-up approach, in that the analyst goes from data, to patterns in the data (often sequential patterns), to general descriptions, and ultimately to theory.

Event data analysis has much in common with data mining, although not all data used in event data analysis need to be "mined" (e.g., verbal reports); not all data that are mined take the form of events (e.g., document corpora); and not all event data that are mined are immediately useful for human-system integration (e.g., Google's web crawling to update PageRank2 algorithms).

The assumption behind event data analysis is that the descriptions of behavior (i.e., patterns of use, collaborative interactions) that result can inform system design or can be used to evaluate the impact of a new tool or system on human performance. The output can be a shared representation, a description (often graphical) of users' behavior in context, as well as quantitative indices associated with that description. The richness of the event data affords a deeper look at the behavior behind effective or ineffective human performance and thus is valuable in reducing uncertainty and guiding human-system integration. Event data analysis is useful for deriving summaries of behavior (system, user, or both) in the context of the existing system.
This information is useful, for example, in identifying interface bottlenecks, unused functionality, and patterns of expert or novice actions.

Shared Representations

A set of instrument-collected events typically requires data reduction for meaningful interpretation. Multivariate statistical techniques or sequential data analyses are often applied to these data sets to reduce them to a presumably more meaningful form. Data-reduction methods, such as

2 PageRank is a numerical weighting assigned algorithmically to each element of a set of hyperlinked documents as an indication of its relative importance within the set. The PageRank is typically computed using an analysis of links to and from the documents to calculate document centrality.
multidimensional scaling and Pathfinder network scaling,3 generate shared representations. For example, in Figure 6-7 the Pathfinder data-reduction procedure (Schvaneveldt, Durso, and Dearholt, 1989) resulted in a graphical representation of word processing events (i.e., keystrokes or mouse clicks), with the most commonly occurring pairs of sequential events being directly linked (Cooke, Neville, and Rowe, 1996). The Pathfinder method takes a set of distance estimates (in this case, probability of transitioning from one function to another in the keystroke sequence) and connects nodes (computer functions in this case) with direct links if they are on the shortest path between two nodes. This kind of description of keystrokes might reveal commonly used functions, unused functions, and common event sequences that should be taken into account in system design.

FIGURE 6-7 Example of a Pathfinder network (r = ∞; q = 9) based on conditional transition probabilities between events. Bold numbers on nodes indicate event frequencies. Numbers on links indicate transition probabilities between the two events. SOURCE: Cooke, Neville, and Rowe (1996). Used with permission of Lawrence Erlbaum Associates.

3 Pathfinder network scaling is a structural modeling technique using algorithms that take estimates of the proximity between pairs of items as input and define a network representation that preserves the most important links.
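The link-pruning rule just described can be sketched directly. The following is a minimal, simplified illustration of a Pathfinder-style reduction under the r = ∞ parameter named in the figure caption (with q allowed to reach n − 1), where a path's length is the maximum link weight along it; a direct link survives only if no indirect path is shorter. The distance matrix is invented for illustration:

```python
# Sketch of a Pathfinder-style network reduction (r = infinity variant):
# a direct link is kept only if no indirect path has a smaller minimax
# length. The 3-node distance matrix is invented for illustration.

INF = float("inf")

def pfnet(dist):
    """dist: symmetric distance matrix; returns surviving edges (i, j), i < j."""
    n = len(dist)
    d = [row[:] for row in dist]
    # Floyd-Warshall with the minimax (r = infinity) path metric:
    # the length of a path is the largest single link weight on it.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    # Keep a direct link only if it matches the best (possibly indirect) path.
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] < INF and dist[i][j] <= d[i][j]}

dist = [
    [0, 1, 4],
    [1, 0, 2],
    [4, 2, 0],
]
print(pfnet(dist))  # link 0-2 (weight 4) is pruned: path 0-1-2 has minimax length 2
```

With event data, the "distances" would be derived from transition probabilities between functions, as in the Cooke, Neville, and Rowe (1996) example.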
TABLE 6-4 Examples of Uses of Event Data Analysis

Questions: What does the operator do from moment to moment? What options are not used? What options precede the request for help? What action sequences occur often enough to be automated or assisted?
Event data: Keystrokes, mouse movements, click streams.

Questions: What are the service demands made on a shared resource (like a server or a database)? What are critical dates or times of day? How can server/database traffic be anticipated or smoothed?
Event data: Hits on a web site; database accesses; server traffic. (While conventional server logs provide a very low-level view of these demands, instrumentation can provide a work-oriented account of server demands.)

Questions: What are the current issues that the organization is grappling with? What is the organization's current intellectual capital?
Event data: User-initiated social-software events and data, like tag creation and tag modification, blog entries, wiki entries, and current searches.

Questions: What are people thinking and planning as they work? What confuses them?
Event data: Think-aloud reports; verbal reports; paired-user testing.

Questions: What is the communication network in the organization? Who communicates with whom?
Event data: Communications events (email, chat, meeting attendance).

Questions: What is the context of critical events? How often do critical events occur and what events precede and follow them?
Event data: Stream of video events (e.g., in an emergency room or air traffic control center); one or more recordings of shared radio frequencies among emergency responders.

Questions: How do people use the work space? What communication patterns or traffic patterns occur? How can the space be used more effectively or efficiently?
Event data: Movement in an office space.

Uses of Methods

In the context of cognitive work analysis, event data analysis can be especially useful for strategies analysis and social, organization, and cooperation analysis.
In the context of organizational analysis, it has specific application in the descriptions of behavior. Overall, event data analysis is a useful approach for systematizing observations and, as such, is of value for defining the context of use in the early ICM phase of exploration. For
example, event data analysis can contribute to the development of system requirements by describing the current context of use. It also can be used to describe behavior in the context of a new design, thereby pitting old design against new, as might be helpful in the development and operation phases.

TABLE 6-4 (continued) Analyses and sample outcomes corresponding to each type of event data above

Analyses: Frequency analysis; lag sequential analysis; ProNet.
Sample outcomes: Usability data; frequency of actions and action sequences; specific sequential dependencies.

Analyses: Frequency analysis; time-series analysis; critical path analysis.
Sample outcomes: High-frequency and low-frequency service requests; prediction of server load and potential outages; redistribution of functionality; database redesigns.

Analyses: Lexical analysis; cluster analysis; social network analysis.
Sample outcomes: Identification of new trends; intelligence analysis; organizational models.

Analyses: Protocol analysis.
Sample outcomes: Descriptions of cognitive processes for that individual; confusing or error-prone aspects of the user experience.

Analyses: Social network analysis, via Pathfinder or UCInet.
Sample outcomes: Network graphs to show frequent communications patterns; identification of particular communication roles, such as organizer, interorganizational gatekeeper, etc.

Analyses: Video analysis; exploratory sequential data analysis of video or audio streams.
Sample outcomes: Errors and near-misses and events that are temporally related to them; ethnographic interpretation based on video records.

Analyses: Frequencies; link analysis.
Sample outcomes: Co-occurrence of individuals in the same space; overused and underused areas, traffic patterns; workspace layout.

Event data analysis encompasses a family of methods differing on a variety of dimensions. A sample of possible applications of this approach appears in Table 6-4. Most salient to differentiating these methodologies is
the nature of the data or events that are recorded for analysis. Events are discrete slices that occur within an ongoing stream of behavior. Thus the data have temporal properties that lend themselves to sequential data analysis. Some events that are recorded have been used primarily to understand the behavior of a single user interacting within a larger system. Other events take a broader look at the collaboration among multiple users and nonhuman agents, which also occurs in the context of a larger system. Examples of individually oriented event data analysis include verbal protocol analysis (e.g., Ericsson and Simon, 1984), video analysis (e.g., Bødker, 1996), computer event analysis (e.g., Carreira et al., 2004; Cooke, Neville, and Rowe, 1996; Vortac, Edwards, and Manning, 1994), eye-head movement analysis (e.g., Salvucci and Anderson, 2001), as well as physiological measures (Sarter and Sarter, 2003). Event data analysis applied to collaboration includes communication and interaction analysis (e.g., Bowers et al., 1998; Kiekel, Gorman, and Cooke, 2004; Olson, Herbsleb, and Rueter, 1994; Paley, Linegang, and Morley, 2002).

The nature of the events collected dictates the intrusiveness of event data analysis (e.g., verbal think-aloud protocol events are more intrusive than logs of text chat). Note, however, that a procedure that is not behaviorally intrusive, such as passive screen recording, may nonetheless raise significant privacy problems, making it highly invasive for at least some users (e.g., Tang et al., 2006). The application of event data analysis to collaboration is an interesting and fortuitous application for a number of reasons.
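The communication and interaction analyses cited above start from a very simple reduction: collapsing a stream of communication events into a weighted who-talks-to-whom structure. A minimal sketch, with the event log (sender, receiver pairs) invented for illustration:

```python
# Sketch: reducing communication events (sender, receiver) to a weighted
# who-talks-to-whom network, the starting point for communication analysis.
# The event log is invented for illustration.

from collections import Counter

events = [
    ("ana", "ben"), ("ben", "ana"), ("ana", "ben"),
    ("ana", "cal"), ("cal", "ben"),
]

def comm_network(events):
    """Count messages per undirected pair of communicators."""
    return Counter(tuple(sorted(pair)) for pair in events)

net = comm_network(events)
print(net[("ana", "ben")])  # 3 messages exchanged between ana and ben
```

The resulting pair counts can feed directly into social network metrics or a Pathfinder-style reduction like the one shown earlier.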
Just as the verbal protocol that results from thinking aloud is assumed to reflect cognitive processing at the individual level, event data analysis applied to teams is assumed to reflect cognition at the team level (though some assume that it is the team-level thinking; Gorman, Cooke, and Winner, in press). Indeed, this is the theoretical basis of distributed cognition (Hutchins, 1995) and of the concept of the collective subject in activity theory (Nardi, 1996). However, the beauty of this general approach applied to groups or teams is that the process that one would like to trace is more readily observed at the group level than at the individual level. That is, one cannot observe individual thought processes and so must rely on verbal reports as an indirect measure. But groups communicate, interact, and (some would argue) engage in team-level cognitive processing as a matter of course, making communication events, and therefore team-level thinking, readily observable.

Although EDA methods such as protocol analysis (Ericsson and Simon, 1984) and video analysis (Bødker, 1996) have been around for some time, advances in computing power have made it possible to automate, speed up, and implement in real time many aspects of event data analysis. With this growth in technology, applications have similarly grown beyond user testing to problems in collaborative filtering, adaptive user profiles, marketing, communications analysis, and even intelligence analysis. For instance, tools like RUI (Recording User Input; Kukreja, Stevenson, and Ritter, in press) create logs of user interface behavior, ideally suited for event data analysis. In addition, Web 2.0 and the emerging concept of "attention data" (i.e., where does the user spend time and effort?) promise to create innumerable possibilities for rich yet unobtrusive data collection.

The methods and tools associated with event data analysis can be categorized by the methodological step in which each is used. Steps include data collection, data analysis, data representation, and assessment and diagnosis. For some applications it may be sufficient to generate a shared representation, and in others it may be more informative to carry the analysis through to assessment and diagnosis. Each of these steps depends on the intended use or application of event data analysis and is discussed in turn.

Data Collection

Data collected as events range from verbal reports during thinking aloud and video or computer events to eye movements and other physiological measures. Data can also be collected at the group, team, or organizational level. In the spirit of process tracing, the data are not one-time snapshots of individual or group performance (e.g., response time or accuracy) but are indices of a continuous stream of behavior or interactions. Thus, the data recorded for this purpose can include physical interactions among group members (e.g., movement patterns in an office space), events of a certain type that are relevant to the research question (e.g., meetings, phone calls, solitary work, breaks), events that occur strictly over the Internet (emails, text messaging, chat), discourse (written, oral, or gestural), and other kinds of group-level verbal behaviors, such as storytelling and group narratives.
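Instrumented collection of such an event stream can be sketched minimally. The schema below (timestamp, actor, kind, detail) is an assumption for illustration, not a format prescribed by the chapter; the point is that each record preserves the ongoing stream rather than a one-time performance snapshot:

```python
# Sketch of instrumented event collection in the process-tracing spirit:
# a chronological stream of timestamped records, queryable by event kind.
# The record schema is an assumption for illustration.

import time

class EventLog:
    def __init__(self):
        self.events = []  # chronological stream of event records

    def record(self, actor, kind, detail=""):
        """Append one timestamped event to the stream."""
        self.events.append({"t": time.time(), "actor": actor,
                            "kind": kind, "detail": detail})

    def by_kind(self, kind):
        """Filter the stream by event kind (e.g., 'keystroke', 'chat')."""
        return [e for e in self.events if e["kind"] == kind]

log = EventLog()
log.record("user1", "keystroke", "ctrl+s")
log.record("user1", "menu", "File>Save As")
log.record("user2", "chat", "done with draft?")
print(len(log.by_kind("keystroke")))  # 1
```

In a real deployment the same stream could capture chat, meeting, and movement events side by side, supporting the group-level analyses described above.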
The ultimate success of event data analysis is largely determined by the selection of data to record and the parsing of those data into events. The meaningfulness of the resulting behavioral and collaborative patterns can depend on how data are parsed. Although data collection is relatively straightforward and can be facilitated with tools, decisions about the nature of the data to be collected are not. For example, from whom are data collected? Is it an expert user, a manager, a developer, or a novice? Decisions like these should hinge on the questions that are asked. In addition, these decisions require experienced human intervention and are not well supported by technology.

Data Analysis

Although the rich data needed for this approach can be gathered relatively easily and unobtrusively, there is a downside: the data are rich
and qualitative, and identifying patterns and high-level descriptions of behavior is a challenge, especially if undertaken manually. Data transcription and coding of the type required to get started on communication analysis can take many more hours than the raw data took to collect. Once the data are in a coded form, an analytic method is applied to explore the data and look for patterns. Thus, when it comes to event data, one chief goal of the data analysis is to reduce the data in a meaningful way.

Exploratory sequential data analysis (ESDA; Sanderson and Fisher, 1994) is a general approach to this problem that relies heavily on the use of sequential data analysis methods, such as lag sequential analysis or Markov modeling. Although lag sequential analysis and Markov modeling are foundational tools of human factors, custom tools have also been developed (e.g., MacSHAPA, SHAPA) to facilitate the data analysis process. Recognition of statistical patterns in the data has become easier to automate, relieving the human coder of much of the burden.

Other foundational data-reduction methods traditionally applied to similarity or relatedness judgments, rather than event data, have also been applied to help simplify event data analysis. Techniques include multidimensional scaling (Shepard, 1962a, 1962b), cluster analysis (Shepard and Arabie, 1979), and Pathfinder (Schvaneveldt, Durso, and Dearholt, 1989). For example, Pathfinder has been adapted for use with event data (i.e., Cooke, Neville, and Rowe, 1996) as well as for the analysis of communication flow data (Kiekel, Gorman, and Cooke, 2004). Furthermore, event data have also been used to derive social networks, although this is certainly not the typical approach to social network analysis, which has relied more on human judgments regarding relationships (e.g., Tyler, Wilkinson, and Huberman, 2005).
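The first-order sequential structure that lag sequential analysis and Markov modeling look for can be illustrated by estimating transition probabilities from a coded event sequence. A minimal sketch; the coded sequence is invented for illustration:

```python
# Sketch of a first-order (lag-1) transition analysis: estimate
# P(next event | current event) from a coded event sequence.
# The sequence of coded events is invented for illustration.

from collections import Counter, defaultdict

def transition_probs(seq):
    """Return {(a, b): P(b follows a)} estimated from adjacent pairs."""
    pair_counts = Counter(zip(seq, seq[1:]))
    totals = defaultdict(int)
    for (a, _), c in pair_counts.items():
        totals[a] += c
    return {(a, b): c / totals[a] for (a, b), c in pair_counts.items()}

seq = ["open", "edit", "save", "edit", "save", "close"]
probs = transition_probs(seq)
print(probs[("edit", "save")])  # 1.0: every "edit" was followed by "save"
print(probs[("save", "edit")])  # 0.5
```

Lag sequential analysis would additionally test whether such transition frequencies depart from chance; the estimated probabilities here are the raw material for that test, or for the transition matrix of a Markov model.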
These various analytic methods tend to focus on different aspects of the data and thus serve to reduce the data by highlighting different aspects.

Data Representation

The descriptive analytic techniques, such as multidimensional scaling or Pathfinder-based communication analysis routines, often return complex, though rich, descriptions of behavior. Patterns are not always easy to detect in the output by visual inspection. In the data representation step, the output from the analysis is presented to the analyst as a shared representation and in this regard is meant to facilitate interpretation. Versions of the Pathfinder routine, for example, return linked nodes in a graphical format that can be spatially manipulated by the analyst. The application of visualization techniques and tools for these complex behavioral and interaction patterns is an area that is ripe for further research.
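As a small illustration of the representation step, an analyzed link structure can be emitted in a standard graph-description format (Graphviz DOT here) so the analyst can render and inspect it visually. The edge list and weights are invented for illustration:

```python
# Sketch: emitting an analyzed link structure as Graphviz DOT text,
# a simple shared representation the analyst can render and inspect.
# The edge list is invented for illustration.

def to_dot(edges):
    """edges: dict (node_a, node_b) -> weight; returns DOT source text."""
    lines = ["graph events {"]
    for (a, b), w in sorted(edges.items()):
        lines.append(f'  "{a}" -- "{b}" [label="{w}"];')
    lines.append("}")
    return "\n".join(lines)

edges = {("cut", "paste"): 0.8, ("open", "edit"): 0.6}
print(to_dot(edges))
```

Feeding this text to any DOT renderer produces a manipulable node-link diagram of the kind described for Pathfinder output.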
Human-System Integration in the System Development Process: A New Look

Assessment and Diagnosis

The preceding three steps result in a qualitative description of individual or collaborative behavior (sequence of eye movements, frequent chains of mouse clicks, who is talking to whom, how often individuals interact, bottlenecks, isolates, etc.), but up to this point no value has been placed on the description. One could postulate the costs and benefits of obvious characteristics of a description, such as an infrequent action, a bottleneck, or an isolate, and indeed these general evaluative interpretations can be made. Metrics from social network analysis can also be adopted for evaluating a procedural network representation. However, making the jump from description to a deeper, contextually meaningful interpretation is the most challenging aspect of this process and the most difficult to automate. One approach is to map (in a very bottom-up way) the descriptions within context onto other criterion measures (e.g., errors, speed, conflict, poor situation awareness, shared mental models). Automating this process would involve having a machine learn to discriminate behavioral patterns and attach meaning to them. For instance, a particular series of mouse clicks might be indicative of a specific erroneous mental model, which could then be targeted for intervention. Assessment and diagnosis move event data analysis beyond its purely descriptive status to serve an additional evaluative function.

Contributions to System Design Phases

Table 6-5 describes how EDA can be applied across the life-cycle phases. It can be used in exploration to gather information about existing conditions, in advance of engineering a new or enhanced system. For example, capturing a series of keystroke-level events can inform the analyst about the order in which operations are actually carried out, the sequence in which systems are accessed, and the communications (people, frequency, media) that are part of current work. This information can be used to find problems, inefficiencies, and opportunities for improvement. These data can also provide early indications of unanticipated usage patterns, which the alert analyst can "harvest" to create best practices or new product or feature proposals. From a more conservative risk-management perspective, these data can help analysts and their teams solve problems that are actually occurring in real work, rather than expend resources on problems that are less important.

TABLE 6-5 Life-Cycle Phases of the ICM and EDA

Phase          Method   Variation
Exploration    EDA      May help scope the problem; can be based on expert judgment if no system yet exists.
Valuation      EDA      Use to describe existing behavior; highlight obvious weaknesses and strengths.
Architecting   EDA      Begin to focus more on the future behavioral repertoire; changes to existing behavior patterns.
Development    EDA-E    Can collect behavioral data with a prototype and evaluate the success of the new design.
Operation      EDA-E    Given other criteria, can collect data from users in beta testing to assess success.

NOTE: EDA-E (Evaluative) includes evaluative steps such as assessment and diagnosis.

EDA can also be applied at the early development stages of human-system integration: valuation and architecting. Event data can be collected before a prototype exists if expert judgments are used in lieu of events. Application in the early stages reduces risk by providing information about typical user behavior or normative patterns of collaboration. Systems developed within the framework of these behavioral patterns avoid the risk of being incompatible with user or collaborative behavior, and consequently avoid costly redesign. Behavioral patterns provide valuable information about ongoing individual behavior and the collaborative process, including strengths, weaknesses, and possible constraints for future design. By relying on expert judgment about behavior, through participatory design for example, rather than on actual behavioral observations, these methods also become useful for describing envisioned systems.
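A minimal sketch of how such keystroke-level logs can be mined: the fragment below counts every fixed-length chain of events in a session log and reports the chains that recur, surfacing the order in which operations are actually carried out. The event names and log are hypothetical.

```python
from collections import Counter

def frequent_chains(events, n=3, min_count=2):
    """Count every length-n chain of consecutive events and return
    those occurring at least min_count times, most frequent first."""
    grams = Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))
    return [(chain, c) for chain, c in grams.most_common() if c >= min_count]

# Hypothetical click/keystroke log from one work session
log = ["open", "search", "copy", "paste", "search", "copy",
       "paste", "search", "copy", "paste", "save"]
chains = frequent_chains(log, n=3)
# The recurring search-copy-paste chain stands out, hinting at a
# repetitive manual work flow that a redesign might automate.
```

A chain that the analysts did not anticipate, recurring across many users, is exactly the kind of pattern the text suggests "harvesting" into a best practice or feature proposal.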
Applying EDA in the later stages of system testing draws attention to possible problems and provides guidance for selecting among two or more design alternatives, based on their compatibility with human or collaborative behavior. This guidance reduces the risk of needing changes even later in system development, or the even greater risk of failures in system productivity or safety. More evaluative information, such as which behavioral repertoire is faster, more efficient, or best for situation awareness, is useful in later phases of development to test or compare candidate designs. In this role the technique provides a means of assessing individual performance. EDA can similarly be used to assess collaborative performance, which is often overlooked in favor of more general outcome or system performance measures. Beyond assessment, it can also provide a deeper, more explanatory level of analysis regarding the effects of a design on behavior.
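The comparison of design alternatives can be grounded directly in timestamped event logs. The sketch below derives one simple behavioral metric, mean task duration, from start/end markers in each design's log; the marker names, timestamps, and design labels are all invented for illustration.

```python
from statistics import mean

def compare_designs(logs):
    """Given {design: [(timestamp, event), ...]} logs containing
    'task_start' and 'task_end' markers, report the mean task
    duration observed under each design alternative."""
    summary = {}
    for design, events in logs.items():
        durations, start = [], None
        for t, ev in events:
            if ev == "task_start":
                start = t
            elif ev == "task_end" and start is not None:
                durations.append(t - start)
                start = None
        summary[design] = mean(durations)
    return summary

# Hypothetical timestamped logs from two prototype variants
logs = {
    "design_A": [(0, "task_start"), (40, "task_end"),
                 (50, "task_start"), (100, "task_end")],
    "design_B": [(0, "task_start"), (25, "task_end"),
                 (30, "task_start"), (65, "task_end")],
}
result = compare_designs(logs)  # lower mean duration favors design_B
```

In practice the same log would feed several such metrics (error-event rates, communication frequencies), and the comparison would rest on more than a single summary number.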
Strengths, Limitations, and Gaps

Relative to some other methods for human-system integration, EDA has a number of distinctive advantages and disadvantages that should be weighed along with the risk of not using it. One significant advantage is that these data can be collected unobtrusively in the field or operational setting. With more sophisticated tools, much of the processing and analysis can also be done automatically and in real time. This allows user data to be collected without interrupting users in their routine tasks, consequently avoiding changes in the results due to the interruption and making the most of users' time. There are also costs. Defining the data events and analyzing and interpreting the rich data collected require some expertise and time, particularly for a new task. The costs of these definition and interpretation activities should decline as analysts gain experience with a particular domain and task. Another cost of the methodology is the ethical and privacy concern that arises when data collection can occur outside an operator's awareness. The collection of much of the data described in this section raises new issues in security, privacy, and ultimately ethics. Some organizations provide guidelines or policies in these areas, but even then there are many questions for which the researcher, practitioner, or engineer must take responsibility. Many systems inform users that their data may be used for research purposes. For large-scale systems, users often form the reasonable assumption that their small use of the system will be "under the radar" of any research program. However, contemporary and near-future quantitative techniques address very large data sets and can easily find individual users who match certain search criteria.
Thus no one can confidently remain under the radar any longer, but most users are not aware of this change. Finally, there are also risks. One obvious risk is that a focus on recordable, quantifiable data may push other phenomena out of focus. For example, measuring the events that occur during a computerized work flow may distract the team from the noncomputerized work-arounds and fix-ups that may also be taking place. Failing to observe these more qualitative, harder-to-record events can in turn lead to several types of errors: (1) problems with the work flow may go undetected if all that is measured is the work flow itself, and (2) training or education levels may be underestimated if the more demanding work-around activities are not recorded. Thus, event data analysis is one tool in the toolbox and should be used in balance with other, more qualitative tools. A second risk was alluded to earlier in reference to data collection: these techniques will generate results regardless of the quality of the data. Decisions about what data to collect, in what context, for how long, and
from whom are critical and nontrivial. Without experienced decision makers, the analyst risks the "garbage in, garbage out" dilemma and, depending on familiarity with the domain, may never recognize the limits of the data.
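One partial defense against the garbage-in, garbage-out dilemma is to screen event logs for obvious quality problems before analysis begins. The sketch below checks two common ones, missing fields and non-monotonic timestamps; the field names and sample records are hypothetical, and real screening would be tailored to the domain.

```python
def validate_event_log(events, required=("timestamp", "actor", "event")):
    """Screen a list of event dicts for common quality problems:
    missing/empty required fields and timestamps that run backward.
    Returns (index, description) pairs for each problem found."""
    problems = []
    last_t = None
    for i, ev in enumerate(events):
        missing = [f for f in required if f not in ev or ev[f] in (None, "")]
        if missing:
            problems.append((i, "missing fields: " + ", ".join(missing)))
        t = ev.get("timestamp")
        if t is not None and last_t is not None and t < last_t:
            problems.append((i, "timestamp out of order"))
        if t is not None:
            last_t = t
    return problems

# Hypothetical log with two injected defects
log = [
    {"timestamp": 1, "actor": "u1", "event": "click"},
    {"timestamp": 3, "actor": "", "event": "keypress"},   # actor lost
    {"timestamp": 2, "actor": "u1", "event": "click"},    # clock skew
]
issues = validate_event_log(log)
```

Such checks catch only mechanical defects; they cannot substitute for the experienced judgment about what to collect, from whom, and in what context that the text emphasizes.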