7
Defining Requirements and Design

Design is fundamentally an innovative process. The methods discussed in this chapter are intended to support identification and exploration of design alternatives to meet the requirements revealed by analyses of the opportunity space and the context of use. The methods are not a substitute for creativity or inventiveness; rather, they provide a structure and context in which innovation can take place. We begin with a discussion of the need for, and the methods used to establish, requirements based on the concept of user-centered design. The types of methods included here are work domain analysis; workload assessment; situation awareness assessment; participatory design; contextual design; physical ergonomics; methods for analyzing and mitigating fatigue; and the use of prototyping, scenarios, personas, and models and simulations. As with the descriptions in Chapter 6, each type of method is described in terms of uses, shared representations, contributions to the system design phases, and strengths, limitations, and gaps. These methods are grouped under design because their major contributions are made in the design phase; however, it is important to note that they are also used in defining the context of use and in evaluating design outcomes as part of system operation. Figure 7-1 provides an overview.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Human-System Integration in the System Development Process: A New Look

FIGURE 7-1 Representative methods and sample shared representations for defining requirements and design.

USABILITY REQUIREMENTS

Overview

Inadequate user requirements are a major contributor to project failure. The most recent CHAOS report from the Standish Group (2006), which analyzes the reasons for technology project failure in the United States, found that only 34 percent of projects were successful; 15 percent failed completely, and 51 percent were only partially successful. Five of the eight most frequently cited causes of failure concerned poor user requirements:

- 13.1 percent: incomplete requirements
- 12.4 percent: lack of user involvement
- 10.6 percent: inadequate resources
- 9.9 percent: unrealistic user expectations
- 9.3 percent: lack of management support
- 8.7 percent: requirements keep changing
- 8.1 percent: inadequate planning
- 7.5 percent: system no longer needed

Among the main reasons for poor user requirements are (1) an inadequate understanding of the intended users and the context of use and (2) vague usability requirements, such as "the system must be intuitive to use."

Figure 7-2 shows how usability requirements relate to other system requirements. Usability requirements can be seen from two perspectives: characteristics designed into the product, and the extent to which the product meets user needs (quality in use requirements). There are thus two types of usability requirements. Usability as a product quality characteristic is primarily concerned with ease of use. ISO/IEC 9126-1 (International Organization for Standardization, 2001) defines usability in terms of understandability, learnability, operability, and attractiveness. There are numerous sources of guidance on designing user interface characteristics that achieve these objectives (see the section on guidelines and style guides under usability evaluation).
While designing to conform to guidelines will generally improve an interface, usability guidelines are not sufficiently specific to constitute requirements that can be easily verified. Style guides are more precise and are valuable in achieving consistency across screen designs produced by different developers. A style guide tailored to project needs should form part of the detailed usability requirements.

FIGURE 7-2 Classification of requirements. SOURCE: Adapted from ISO/IEC 25030 (International Organization for Standardization, 2007).

At a more strategic level, usability is the extent to which the product meets user needs. ISO 9241-11 (International Organization for Standardization, 1998) defines this as the extent to which a product is effective, efficient, and satisfying in a particular context of use. This high-level requirement is referred to in ISO software quality standards as "quality in use." It is determined not only by ease of use, but also by the extent to which the functional properties and other quality characteristics meet user needs in a specific context of use. In these terms, usability requirements are very closely linked to the success of the product.

Effectiveness is a measure of how well users can perform the job accurately and completely. Efficiency is a measure of how quickly a user can perform work; it is generally measured as task time, which is critical for productivity. Satisfaction is the degree to which users like the product, a subjective response that includes the perceived ease of use and usefulness. Satisfaction is a success factor for any product with discretionary use, and it is essential for maintaining workforce motivation.
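The three quality-in-use measures can be operationalized directly from observed task sessions. The sketch below is illustrative only; the task records and the 1-5 satisfaction scale are assumptions, not data from the chapter:

```python
# Compute the three quality-in-use measures from observed task sessions.
# The session records and 1-5 questionnaire scale are hypothetical.

def effectiveness(sessions):
    """Success rate: percentage of tasks completed accurately and completely."""
    completed = sum(1 for s in sessions if s["success"])
    return 100.0 * completed / len(sessions)

def efficiency(sessions):
    """Mean task time in seconds over successful tasks (lower is better)."""
    times = [s["time_s"] for s in sessions if s["success"]]
    return sum(times) / len(times)

def satisfaction(scores):
    """Mean score on a 1-5 questionnaire scale (higher is better)."""
    return sum(scores) / len(scores)

sessions = [
    {"success": True,  "time_s": 95.0},
    {"success": True,  "time_s": 110.0},
    {"success": False, "time_s": 240.0},
    {"success": True,  "time_s": 101.0},
]
print(effectiveness(sessions))         # 75.0 (percent)
print(round(efficiency(sessions), 1))  # 102.0 (seconds)
print(satisfaction([4, 5, 3, 4]))      # 4.0 (on a 1-5 scale)
```

Measured this way, the same three numbers can serve first as a baseline for an existing system and later as pass/fail evidence for the new one.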

Uses of Methods

Measures of effectiveness, efficiency, and satisfaction provide a basis for specifying concrete usability requirements.

Measure the Usability of an Existing System

If in doubt, the figures for an existing comparable system can be used as the minimum requirements for the new system. Evaluate the usability of the current system when carrying out key tasks, to obtain a baseline for the current system. The measures taken would typically include

- success rate (percentage of tasks in which all business objectives are met),
- mean time taken for each task, and
- mean satisfaction score using a questionnaire.

Specify Usability Requirements for the New System

Define the requirements for the new system, including the type of users, tasks, and working environment. Use the baseline usability results as a basis for establishing usability requirements. A simple requirement would be that when the same types of users carry out the same tasks, the success rate, task time, and user satisfaction should be at least as good as for the current system. It is useful to establish a range of values: the minimum to be achieved, a realistic objective, and the ideal objective (from a business or operational perspective).

It may also be appropriate to establish usability objectives for learnability, for example, the duration of a course (or use of training materials) and the user performance and satisfaction expected both immediately after training and after a designated length of use. It is also important to define any additional requirements for user performance and satisfaction related to users with disabilities (accessibility), critical business functions (safety), and use in different environments (universality). Depending on the development environment, requirements may, for example, either be iteratively elaborated as more information is obtained from usability activities, such as paper prototyping during development, or agreed by all parties before development commences and subsequently modified only by mutual agreement.

Test Whether the Usability Requirements Have Been Achieved

Summative methods for measuring quality in use (see Chapter 8) can be used to evaluate whether the usability objectives have been achieved. If any of the measures fall below the minimum acceptable values, the potential risks associated with releasing the system before the usability has been improved should be assessed. The results can be used to prioritize future usability work in subsequent releases.

Shared Representations

The Common Industry Specification for Usability Requirements (Theofanos, 2006) provides a checklist and a format that can be used initially to support communication between the parties involved, to obtain a better understanding of the usability requirements. When the requirements are more completely defined, it can be used as a formal specification of requirements that can subsequently be tested and verified. The specification is in three parts:

- The context of use: intended users, their goals and tasks, associated equipment, the physical and social environment in which the product will be used, and examples of scenarios of use. An incomplete understanding of the context of use is a frequent reason for partial or complete failure of a system when implemented. The context of use comprises the characteristics of the users, their tasks, and the usage environment; several methods can be used to obtain an adequate understanding of this type of information (see Chapter 6).
- Usability measures: effectiveness, efficiency, and satisfaction measures for the main scenarios of use, with target values when feasible.
- The test method: the procedure to be used to test whether the usability requirements have been met, and the context in which the measurements will be made. This provides a basis for testing and verification.

The context of use should always be specified. The importance of specifying criteria for usability measures (and an associated range of acceptable values) will depend on the potential risks and consequences of poor usability.

Communication Among Members of the Development Team

This information facilitates communication among the members of the development or supplier organization. It is important that all concerned groups in the supplier organization understand the usability requirements before design begins. Benefits include the following:

- Reducing the risk of product failure. Specifying performance and satisfaction criteria derived from existing or competitor systems greatly reduces the risk of releasing a product that is inferior to them.
- Reducing the development effort. This information provides a mechanism for the various concerned groups in the customer's organization to consider all of the requirements before design begins, reducing later redesign, recoding, and retesting. Review of the specified requirements can reveal misunderstandings and inconsistencies early in the development cycle, when these issues are easier to correct.
- Providing a basis for controlling costs. Identifying usability requirements reduces the risk of unplanned rework later in the development process.
- Tracking evolving requirements, by providing a format in which to document usability requirements.

Communication Between Customers and Suppliers

A customer organization can specify usability requirements to describe accurately what is needed. In this scenario, the information helps supplier organizations understand what the customer wants and supports proactive collaboration between a supplier and a customer.

Specification of Requirements

When the product requirements are a matter for agreement between the supplier and the customer, the customer organization can specify one or more of the following: intended context of use, user performance and satisfaction criteria, and test procedure. The Common Industry Specification for Usability Requirements provides a baseline against which compliance can be measured.
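One way to make such a requirement testable is to record the minimum, realistic, and ideal target values alongside each measure and check measured results against them. The sketch below is a hypothetical illustration of that idea, not the format of the Common Industry Specification itself; the names and numbers are invented:

```python
from dataclasses import dataclass

@dataclass
class UsabilityRequirement:
    """One measurable requirement with minimum, realistic, and ideal targets.
    higher_is_better distinguishes success rate (maximize) from task time
    (minimize). All example values below are invented for illustration."""
    name: str
    minimum: float
    realistic: float
    ideal: float
    higher_is_better: bool = True

    def status(self, measured: float) -> str:
        ok = (measured >= self.minimum if self.higher_is_better
              else measured <= self.minimum)
        if not ok:
            return "below minimum: assess release risk"
        met_ideal = (measured >= self.ideal if self.higher_is_better
                     else measured <= self.ideal)
        return "ideal met" if met_ideal else "acceptable"

success = UsabilityRequirement("success rate (%)", 75, 85, 95)
task_time = UsabilityRequirement("mean task time (s)", 120, 100, 80,
                                 higher_is_better=False)
print(success.status(80))     # acceptable
print(task_time.status(130))  # below minimum: assess release risk
```

A result below the minimum does not automatically block release, but, as noted above, it should trigger an assessment of the risks of releasing before usability is improved.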

Contributions to System Design Phases

Usability requirements should be integrated with other systems engineering activities. For example, the ISO/IEC 15288 standard (International Organization for Standardization, 2002) for system life-cycle processes includes the user-centered activities in the stakeholder requirements definition process, as shown in Box 7-1.

BOX 7-1 User-Centered Activities for Stakeholder Requirements

- Identify the individual stakeholders or stakeholder classes who have a legitimate interest in the system throughout its life cycle.
- Elicit stakeholder requirements, expressed in terms of the needs, wants, desires, expectations, and perceived constraints of the identified stakeholders. Scenarios are used to analyze the operation of the system in its intended environment and to identify requirements that may not have been formally specified by any of the stakeholders, for example, legal, regulatory, and social obligations. The context of use of the system is identified and analyzed, including the activities that users perform to achieve system objectives, the relevant characteristics of the end users (e.g., expected training, degree of fatigue), the physical environment (e.g., available light, temperature), and any equipment to be used (e.g., protective or communication equipment). The social and organizational influences on users that could affect system use or constrain its design are analyzed when applicable.
- Identify the interaction between users and the system. Usability requirements are determined, establishing, as a minimum, the most effective, efficient, and reliable human performance and human-system interaction. When possible, applicable standards (for example, the ISO 9241 series) and accepted professional practices are used to define (1) physical, mental, and learned capabilities; (2) workplace, environment, and facilities, including other equipment in the context of use; (3) normal, unusual, and emergency conditions; and (4) operator and user recruitment, training, and culture.
- Establish with stakeholders that their requirements are expressed correctly.
- Define each function that the system is required to perform and how well the system, including its operators, is required to perform that function.
- Define technical and quality-in-use measures that enable the assessment of technical achievement.

Strengths, Limitations, and Gaps

Establishing high-level usability requirements that can be tested provides the foundation for a mature approach to managing usability in the development process. But while procedures for establishing these requirements are relatively well established in standards, they are not widely applied or understood, and there is little guidance on how to establish more detailed user interface design requirements.

With most emphasis in industry on formative evaluation to improve usability, there is often a reluctance to invest in summative evaluation late in the project. Yet formal summative evaluation against established usability criteria is needed to validly determine whether the usability requirements have been achieved. As much of systems development is carried out on a contractor-supplier basis (even if the supplier is internal to the customer organization), it is for the contractor to judge whether the investment in establishing and validating usability requirements is sufficient to justify the associated risk reduction. Usability requirements can also provide significant benefits in clarifying user needs and providing explicit user-oriented goals for development, even if they cannot be exhaustively validated. If there are major usability problems, even results from testing three to five participants are likely to provide advance warning of a potential problem (for example, if none of the participants can complete the tasks, or if task times are twice as long as expected).

WORK DOMAIN ANALYSIS

Overview

Among the questions that arise when facing the design of a new system are the following:

- What functions will need to be accomplished?
- What will be automated, and what will be performed by people?
- If people will be involved, how many people will it take, and what will be their role?
- What information and controls should be made available, and how should they be presented to enhance performance?
- What training is required?

One approach to answering these questions is to start with a list of the tasks to be accomplished and perform task analyses to identify the sequence of actions entailed, the information and controls required to perform those actions, and the implications for the number of people and the training required. This approach works well when the tasks to be performed and conditions of use can be easily specified a priori (e.g., automated teller machines). However, in the case of highly complex systems (e.g., a process control plant, a military command and control system), unanticipated situations and tasks inevitably arise. Work domain analysis techniques have been developed to support the analysis and design of these more complex systems, in which all possible tasks and situations cannot be defined a priori. Work domain analysis starts with a functional analysis of the work domain to derive the functions to be performed and the factors that can arise to complicate performance (Woods, 2003). The objective is to produce robust systems that enable humans to operate effectively in a variety of situations, both ones that have been anticipated by system designers and ones that are unforeseen (e.g., safely shutting down a process control plant with an unanticipated malfunction).

Work domain analysis methods grew out of an effort to design safer and more reliable nuclear power plants (Rasmussen, 1986; Rasmussen, Pejtersen, and Goodstein, 1994). Analysis of accidents revealed that operators in many cases faced situations that were not adequately supported by training, procedures, and displays because those situations had not been anticipated by the system designers. In those cases, operators had to compensate for inadequate information or resources in order to recover and control the system. This led Rasmussen and his colleagues to develop work domain analysis methods to support the development of systems that are more resilient in the face of unanticipated situations.

A work domain analysis represents the goals, means, and constraints in a domain that define the boundaries within which people must reason and act. This provides the framework for identifying functions to be performed by humans (or machines) and the cognitive activities those entail. Displays can then be created to support those cognitive activities.
The objective is to create displays and controls that support flexible adaptation by revealing domain goals, constraints, and affordances (i.e., allowing users to "see" what needs to be done and what options are available for doing it).

A work domain analysis is usually conducted by creating an abstraction hierarchy according to the principles outlined by Rasmussen (1986). A multilevel goal-means representation is generated, with abstract system purposes at the top and the concrete physical equipment that provides the specific means for achieving those purposes at the bottom. In many instances, the levels of the model include

- functional purpose (a description of system purposes);
- abstract function (a description of first principles and priorities);
- generalized function (a description of processes);
- physical function (a description of equipment capabilities); and
- physical form (a description of physical characteristics, such as size, shape, color, and location).

Work domain analyses do not depend on a particular knowledge acquisition method. Any of the knowledge acquisition techniques covered in Chapter 6 can be used to inform a work domain analysis. In turn, the results of the work domain analysis provide the foundation for further analyses to inform human-system integration.

There are a growing number of HSI approaches that are grounded in a work domain analysis. A prominent example is cognitive work analysis (Rasmussen, 1986; Rasmussen et al., 1994; Vicente, 1999), which uses work domain analysis as the foundation for deriving implications for system design and related aspects of human-system integration, including function allocation, display design, team and organization design, and knowledge and skill training requirements. Burns and Hajdukiewicz (2004) provide design principles and examples of creating novel visualizations and support systems based on a work domain analysis.

Applied cognitive work analysis provides a step-by-step approach for performing a work domain analysis and linking its results to the development of visualizations and decision-aiding concepts (Elm et al., 2003). The steps include

- using a functional abstraction network to capture domain characteristics that define the problem space confronting domain practitioners;
- overlaying cognitive work requirements on the functional model as a way of identifying the cognitive demands, tasks, and decisions that arise in the domain and require support;
- identifying the information and relationship requirements needed to support the cognitive work identified in the previous step;
- specifying representation design requirements that define how the information and relationships should be represented to practitioners to most effectively support the cognitive work; and
- developing presentation design concepts that provide physical embodiments of the representations specified in the previous step (e.g., rapid prototypes that embody the display concepts).

Each design step produces a design artifact; collectively these form a continuous design thread that provides a traceable link from cognitive analysis to design.
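The multilevel goal-means structure at the heart of these analyses can be sketched as linked nodes, each pointing to the lower-level means that achieve it. The cooling-plant content below is invented purely to illustrate the five-level shape of an abstraction hierarchy:

```python
# A toy abstraction hierarchy: each node names its level and links to the
# lower-level means that achieve it. The cooling-plant labels are invented
# to illustrate the five-level structure, not drawn from a real analysis.

class Node:
    def __init__(self, label, level):
        self.label, self.level, self.means = label, level, []

    def achieved_by(self, *nodes):
        self.means.extend(nodes)
        return self

def trace(node, depth=0):
    """Walk goal-to-means links from system purposes down to physical form."""
    lines = [f"{'  ' * depth}{node.level}: {node.label}"]
    for m in node.means:
        lines.extend(trace(m, depth + 1))
    return lines

pump = Node("coolant pump", "physical function").achieved_by(
    Node("pump casing, impeller, location", "physical form"))
circulation = Node("circulate coolant", "generalized function").achieved_by(pump)
balance = Node("conserve mass and energy", "abstract function").achieved_by(circulation)
purpose = Node("remove heat safely", "functional purpose").achieved_by(balance)

print("\n".join(trace(purpose)))
```

Tracing downward answers "how is this goal achieved?"; tracing upward answers "why does this equipment exist?", which is the reasoning the analysis is meant to make visible.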
Work-centered design (Eggleston, 2003; Eggleston et al., 2005) is another example of an HSI approach that relies on a work domain analysis. Key elements of work-centered design include (a) analysis and modeling of the demands of work, (b) design of displays and visualizations that reveal domain constraints and affordances, and (c) use of work-centered evaluations that probe the ability of the resulting design to support work across a representative range of work contexts and complexities.

There is also a long history of the use of models and simulations in psychology to represent aspects of human behavior, performance, or both. Psychologists use them to summarize what they know and to support theories. Some of these models have been shown to be useful for system design, to estimate and predict performance or to derive performance measures indicative of human-system performance.

Signal Detection Theory

One such mathematical model is signal detection theory, which was originally developed to quantify the detection of signals in noisy radar returns (Peterson et al., 1954). It is applicable to a wide range of human-system decision problems, including medical diagnosis, weather forecasting, prediction of violent behavior, and air traffic control, and it has been shown to be a robust method for modeling these types of problems (Swets et al., 2000). Signal detection theory has been found useful because it provides separate measures of, on one hand, the sensitivity of the human-system combination in discriminating the signal distribution from the noise distribution and, on the other, the decision criterion (the location of the threshold at which people or machines respond with a signal-present or signal-absent decision). The principal value of applying signal detection theory is to develop metrics for human-system performance and to evaluate design trade-offs among detector sensitivity, base rates of the signals of interest, and the overall predictive value of the system output. The method is best employed to model the effectiveness of discrete decision processes supported by automated systems. It serves to reduce the risk of picking the wrong operating point for a decision process, which can produce too many false alarms or too few successful detections.

Models Derived from Human Cognitive Operations

A second, quite different approach is GOMS (Card, Moran, and Newell, 1983).
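Before looking at GOMS in detail, it is worth seeing how signal detection theory's two measures separate in practice. Both can be estimated from observed hit and false-alarm rates; the inverse-normal computation below is the standard equal-variance one, and the rates themselves are invented for illustration:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Return (d_prime, criterion_c) under the equal-variance Gaussian model.
    d' measures sensitivity independent of bias; c locates the decision
    threshold (0 means unbiased, positive means conservative)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical operator-plus-alerting-system: 84% hits, 16% false alarms.
d, c = sdt_measures(0.84, 0.16)
print(round(d, 2))  # ~1.99: moderate sensitivity
print(abs(round(c, 2)))  # ~0.0: an unbiased criterion
```

Shifting the operating point changes c while leaving d' alone, which is exactly the trade-off between false alarms and missed detections that the method is used to evaluate.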
GOMS models represent, for a given task, the user's Goals, Operators (e.g., a keystroke, memory retrieval, or mouse move), Methods (sequences of operators to reach a goal, such as using keystrokes or a menu to open a file), and Selection rules (for choosing which method to use). These models can be applied as soon as there is an explicit design for a user interface, and they have been used to predict response times, learning times, and workload, and to provide a measure of interface consistency and complexity (i.e., similar tasks should use similar methods and operators). These models are now being more widely applied, and tools are available to support their use (Kieras, 1998; Nichols and Ritter, 1995; Williams, 2000). They provide a sharable representation of the tasks, how they are performed, and how long each will take.

GOMS models can support user interface hardware and software design in several ways. They can be used to confirm consistency in the interface, that a method is available for each user goal, that there are ways to recover from errors, and that there are fast methods for frequently occurring goals (Chipman and Kieras, 2004). The GOMS series of models had their most notable documented application in Project Ernestine, which predicted the performance of a new design for a telephone information operator's workstation (Gray, John, and Atwood, 1993). In this case, a variant of GOMS predicted that performance with the new workstation design would be so much slower than with the existing workstation that it would increase operating costs by about $2.5 million per year. The new workstation was actually built and soon abandoned, because the predictions were correct. As another example, preliminary studies suggest that a modeling approach could make cell phone menu use more efficient by reducing interaction time by 30 percent (St. Amant, Horton, and Ritter, 2004); if applied across all cell phones, this would save 28 years of user time per day. Gong and Kieras (1994) describe a GOMS analysis suggesting that a redesign of a commercial computer-aided design system would yield a 40-percent reduction in performance time and a 46-percent reduction in learning time; these savings were later validated with actual users. Thus, simple GOMS models can reduce the risk of subsequent operational inefficiencies quite early in the system development process.

Models can also provide quantitative evidence for change: they can be used to reject a design that does not perform well enough. Glen Osga (noted in Chipman and Kieras, 2004, pp. 9-10) did a GOMS analysis of a new launch system for the Tomahawk cruise missile system.
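Time predictions of the kind GOMS produces come from decomposing a task into elementary operators and summing their durations. The sketch below uses keystroke-level operator times in the spirit of Card, Moran, and Newell (1983); the menu-versus-shortcut task and its method strings are invented for illustration:

```python
# Keystroke-level sketch of a GOMS-style time prediction. The operator
# durations (seconds) follow published keystroke-level model values;
# the "open a file" methods below are an invented illustration.

OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point with a mouse to a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
}

def predict_time(method):
    """Sum operator durations for a sequence such as 'HMPK'."""
    return sum(OPERATOR_TIMES[op] for op in method)

# Method A: open a file through the menu (home to mouse, prepare,
# point to File and click, prepare, point to Open and click).
menu_method = "HMPKMPK"
# Method B: a keyboard shortcut (prepare, then two keystrokes).
shortcut_method = "MKK"

print(round(predict_time(menu_method), 2))      # ~5.86 s
print(round(predict_time(shortcut_method), 2))  # ~1.91 s
```

Even a back-of-the-envelope comparison like this can flag a design whose frequent operations are several seconds slower than an alternative, which is the kind of warning the analyses described here provided.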
The analysis predicted that the launch process with the new system would take too long. The prediction was ignored, and the system was built as designed. Indeed, the system failed its acceptance test and had to be redesigned. As Chipman and Kieras note, it was costly to ignore this analysis, which could have led to a better design.

Despite their usefulness, GOMS models have not been widely used by human factors specialists or systems engineers in systems development, particularly for large systems. Although relatively straightforward, they are perceived to be too difficult and time-consuming to apply.

Digital Human Physical Simulations

A third class of models comprises anthropometric representations of the size, shape, range of motion, and biomechanics of the human body (see also the section on physical ergonomics). Digital human models have been created to predict how humans will fit into physical workspaces, as in ground,
aircraft, or space vehicles, or to assess operations under the constraints of encumbering protective clothing. Representative of these models are commercial offerings such as Jack (http://www.ugs.com/products/tecnomatix/human_performance/jack/) (Badler, Erignac, and Liu, 2002), Safeworks (http://www.motionanalysis.com/applications/industrial/virtualdesign/safeworks.html), and Ramsis (http://www.humansolutions.com/automotive_industry/ramsis_community/index_en.php). They are available as computer programs that represent the static physical dimensions of human bodies, and they are increasingly able to represent dynamics and static stresses for ergonomic analyses (Chaffin, 2004). They are primarily used for checking that range of motion and accessibility are feasible, consistent with safe ergonomic standards, and efficient. They typically contain an anthropometric database that enables them to perform these evaluations for a range of types and sizes of users. Dynamic anthropometric models are thus routinely used to reduce the risk of creating unusable or unsafe systems.

The resulting models and analyses can be shared among designers and across design phases. Having a concrete computer mannequin that confirms the success or failure of accommodation at a workplace is a very useful shared representation. There is growing interest in integrating these models with human behavior representations, so as to combine the physical and cognitive performance of tasks. MIDAS provided an early demonstration of this concept, and new developments are being introduced regularly (e.g., Carruth and Duffy, 2005).

Models that Mimic Human Cognitive and Perceptual-Motor Behavior

A fourth class, human performance and information processing models, simulates the sensory, perceptual, cognitive, and motor behavior of a human operator.
They are referred to by some as integrated models of cognitive systems and by the military as human behavior representations. They interact with a system or a simulation and represent human behavior in enough detail to execute the required tasks in the simulation as a human would, mimicking the results of a human-in-the-loop simulation without the human. Some of these models are based on ad hoc theories of human performance, such as the semiautonomous forces in military simulations like ModSAF and JSAF. Others are built on cognitive architectures that represent theories of human performance. Examples of cognitive architectures include COGNET/iGEN (Zachary, 2000), created specifically for engineering applications; Soar (Laird, Newell, and Rosenbloom, 1987), an artificial intelligence-based architecture used for modeling learning, interruptability, and problem solving; ACT-R (Anderson et al., 2004), used to model learning, memory effects, and accurate reaction time performance;

EPIC (Kieras, Wood, and Meyer, 1997), used to model the interaction between thinking, perception, and action; and D-OMAR (Deutsch, 1998), used to model teamwork. Available reviews note further examples that have been developed for specific purposes (Morrison, 2003; National Research Council, 1998; Ritter et al., 2003).

These human behavior representations are more detailed because they actually mimic the information processing activities that generate behavior. They require a substantial initial investment, and each new application requires additional effort to characterize the task content to be performed. However, once developed, they can be used, modified, and reused throughout the system development life cycle, including to support conceptual design, to evaluate early design prototypes, to exercise system interfaces, and to support the development of operational procedures. They offer the ability to make strong predictions about human behavior. Because they provide not only what the descriptive models provide but also the details of the information processing, they can stand in for users in systems analyses, or serve as colleagues and opponents in training games and synthetic environments. Models in this class have been used extensively in research and demonstration, but they have not, as yet, been widely used in system design (Gluck and Pew, 2005).

In some cases, models of human performance are represented only implicitly in a design tool that takes account of human performance capacities and limitations in making design recommendations. Automatic web site testing software is an example. Guidelines and style guides that suggest good practice in interface design are increasingly being implemented in design tools and guideline testing tools.
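A guideline-testing tool of this kind can be sketched as a walk over every element of a page, checking each element against accessibility rules. The sketch below uses Python's standard-library HTML parser; the two rules it applies are simplified illustrations in the spirit of such tools, not any particular tool's actual rule set.

```python
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Walks every element of a page and records guideline violations."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Section 508-style rule: images need a text alternative so that
        # text-to-speech software can describe them to blind users.
        if tag == "img" and not attrs.get("alt"):
            self.violations.append(f"<img src={attrs.get('src', '?')}>: missing alt text")
        # Illustrative rule: tiny fonts exclude users at the low end of
        # the visual-acuity range.
        if tag == "font" and attrs.get("size") in ("1", "-2"):
            self.violations.append("<font>: font size too small")

def check_page(html):
    checker = AccessibilityChecker()
    checker.feed(html)
    return checker.violations

page = '<html><body><img src="logo.png"><font size="1">fine print</font></body></html>'
for v in check_page(page):
    print(v)
```

Note that even this toy checker embeds assumptions about users (what blind users' software needs, what size text is legible), which is exactly the implicit user modeling the text describes.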
A review of these types of testing tools shows their ease of use and increasing range (Ivory and Hearst, 2001). For example, “Bobby” (http://www.watchfire.com/products/webxm/bobby.aspx) is one of many tools for testing web sites. Bobby notes which parts of a web site are barriers to accessibility for people with disabilities and checks for compliance with existing accessibility guidelines (e.g., from Section 508 of the U.S. Rehabilitation Act). It does this by recursively checking the objects on a web page against these guidelines (e.g., that captions for pictures are provided to support blind users, that fonts are large enough). While the developers of these systems may not have thought of themselves as developing a model of the user, the guidelines and tools make assumptions about users. For example, Bobby makes assumptions about the text-to-speech software used by blind users, as well as about the range of visual acuity of sighted users. The implementation often hides the details of these models, creating implicit human performance models whose shared representation is only the results of the test, not the assumptions

supporting the test. On one hand, to their credit, these tools represent very easy-to-use methods of incorporating consideration of human characteristics into designs. On the other hand, just as with using statistics programs without understanding the computations they implement, using these tools without understanding the limitations of their implicit user models and performance specifications creates risks of inappropriate application or overreliance on the results.

Contributions to System Design Phases

Human-system simulation can play an important role in system design across the development life cycle to reduce development risk. Human-in-the-loop simulation is widely accepted and has been applied successfully in all of the life-cycle phases discussed below. In this section, we focus on applications of human-system modeling, because this kind of modeling has been less widely applied and has the potential to make significant contributions. Routinely in research labs, and increasingly in applied settings, explicit computer models of human performance have been demonstrated for a variety of uses: testing prototypes of systems and their interfaces; testing full interfaces to predict usage time and errors; providing surrogate users to act as colleagues in teamwork situations; and validating interfaces as meeting a standard for operator performance time, workload, and error probability. They can also be used to evaluate whether user requirements are met and whether interfaces are consistent within a common system or a system of systems. Further reviews of models in system design are available (e.g., Beevis, 1999; Howes, 1995; National Research Council, 1998; Vicente, 1999).
Exploration and Valuation

Human-system models can be useful in exploratory design because they can range from back-of-the-envelope calculations to formal models that reflect, at a detailed level, the costs and benefits of alternative approaches to a new or revised system. If one is working in air traffic control, for example, models of traffic flow in the U.S. airspace could be modified to postulate the impact of introducing alternative forms of automation. Analysis and network models are particularly helpful in this stage because they are more flexible and can be applied earlier in the design process. In many cases, a model’s impact in the elaboration phase derives from design lessons learned from previous designs—these lessons help the designer choose better designs in what can be a very volatile design period. An important contribution of a model, especially in the early development stages, is that the model’s development forces the analyst to think very deeply and concretely about the human performance requirements,

about the user-system interactions, and about the assumptions that must be made for a particular design to be successful. For example, a network model can help make explicit the tasks that must be supported, providing a way for development teams to see the breadth of applicability and the potential requirements of a system.

Architecting and Design

During the system’s construction period, models help describe and demonstrate the critical features of human performance in the system. A human-system performance model can serve as a shared representation that supports envisioning the HSI implications of a design. As such, models can help guide design, suggesting and documenting potential improvements. Most model types can be used to predict a variety of user performance measures for a proposed system. These measures, including time to use, time to learn, potential error types, and predicted error frequency, can provide predicted usability measures before the system is built. The models do not themselves tell how to change the system, but they enable alternative designs to be compared. As designers incorporate the implications of a representation in their own thinking, the models also suggest ways to improve the design. In addition, experience with models reflecting multiple design alternatives provides a powerful way to help designers understand how the capacities and limitations of their users constrain system performance. Booher and Minninger (2003) provide numerous examples in which redesign was performed, sometimes with initial reluctance but with long-term payoff, based on model-based evaluations at this and later stages of design.

In a previous section, the usefulness of prototypes was highlighted. Prototypes can be represented at many different levels of specificity.
When the design has progressed to the point at which concrete prototype simulations can be developed, it can be very useful to exercise the simulation with a human behavior representation. Developing the human behavior representation will itself be illuminating, because it makes the tasks and human performance concrete, and it will also be useful for exploring alternative operational concepts, refining the procedures of use, and identifying the user interface requirements. Again, the human-system simulation can serve as a very useful shared representation that brings the development team together.

Evaluation

Models can be very helpful in evaluating prototype system and user-interface designs: that is, a model of the user can be used to evaluate how the interface presents information or provides functionality.
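At its simplest, such model-based evaluation can be analytic rather than simulation-based. The sketch below predicts two of the usability measures named earlier, task time and error likelihood, for two hypothetical interface variants; the step times and per-step error rates are invented purely for illustration.

```python
# A minimal analytic sketch of model-based evaluation: predict task
# time and error likelihood for two interface variants. All numbers
# below are invented for illustration.

def predict(steps):
    """steps: list of (seconds, error_probability) tuples, one per task step."""
    total_time = sum(t for t, _ in steps)
    # Probability that at least one step fails somewhere in the task.
    p_ok = 1.0
    for _, p_err in steps:
        p_ok *= (1.0 - p_err)
    return total_time, 1.0 - p_ok

# Hypothetical Design A: many quick steps; Design B: fewer, slower steps.
design_a = [(1.2, 0.01)] * 8
design_b = [(2.5, 0.01)] * 4

for name, steps in (("A", design_a), ("B", design_b)):
    t, p = predict(steps)
    print(f"Design {name}: {t:.1f} s per task, {p:.1%} chance of an error")
```

Even a model this crude illustrates the point in the text: it does not say how to change the design, but it lets alternatives be compared (here, Design A is faster per task but accumulates more opportunities for error).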

Refining and testing offer perhaps the canonical application of user models in system design. The same or refined versions of models applied earlier in the design process can be reused to support system evaluation. A human model can exercise the interface and compute a variety of usability and system performance measures. While the system is still evolving, evaluation is formative—that is, it supports refinement and improvement. In the later stages of test and evaluation, the evaluation is summative, providing estimates of how the system will perform in the field. Many examples of refining systems using models are now available (Booher and Minninger, 2003; Kieras, 2003; St. Amant, Freed, and Ritter, 2005). Also, all types of models have been used to help create system documentation or to develop training materials. Because a model specifies what knowledge is required to perform a task, the model’s knowledge can also serve as a set of information to include in training and operations documentation, either in a manual or within a help system.

Operation

The design of a complex system is never complete, because the system continually evolves. Human-system simulations can continue to be applied to guide this evolution as experience is gained from the system in the field. Potential changes can be tried out in the simulated world and compared with existing performance. This has frequently been done in the space program, in which engineers on the ground try out solutions in simulation to find the best one to communicate to the actual flight crew. It should be noted that simulations are less successful as complexity grows and for dealing with boundary conditions and anomalies.
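Trying out a change in the simulated world and comparing it with existing performance can be sketched as a small Monte Carlo task-network simulation. The task names, time distributions, and the candidate procedure change below are all invented for illustration; real applications would calibrate these from field data.

```python
import random

# A toy Monte Carlo task-network comparison: simulate a baseline
# procedure and a proposed change, then compare completion times.
# Task names and time parameters are invented for illustration.

def simulate(procedure, runs=10_000, seed=42):
    rng = random.Random(seed)
    times = []
    for _ in range(runs):
        # Each task's duration is drawn from a triangular distribution
        # (optimistic, most likely, pessimistic), summed over the sequence.
        times.append(sum(rng.triangular(lo, hi, mode)
                         for lo, mode, hi in procedure.values()))
    times.sort()
    return times[len(times) // 2]  # median completion time

baseline = {"diagnose": (2, 5, 15), "configure": (3, 4, 8), "verify": (1, 2, 4)}
proposed = dict(baseline, configure=(1, 2, 3))  # candidate procedure change

print(f"baseline median: {simulate(baseline):.1f} min")
print(f"proposed median: {simulate(proposed):.1f} min")
```

The same pattern extends to comparing distributions rather than medians, which matters precisely in the boundary-condition and anomaly cases where simple simulations are weakest.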
Strengths, Limitations, and Gaps

Strengths

Simulations, particularly human-in-the-loop simulations, and human-system models are especially valuable because they make concrete, explicit, and quantitative the role of users in task execution and their impact on the characteristics of the systems to be controlled. They provide concrete examples of how a system will operate, not only how the equipment will operate, but also what human-system performance will result.

Another aspect of the use of models and simulations in design is the cumulative learning that occurs in the designer as a result of a simulation-based design and evaluation process. When using a model or simulation to design an interface, the designer receives feedback about users, their behavior, and how they interact with systems. In their next design task, if the feedback was explicit

and heeded, designers have a richer model of the user and of the system, their joint behavior, and the roles users play. Having this knowledge in the designer’s head supports the creative process and makes the knowledge easier to apply than through an external tool.

Limitations

Ease of use. If models demand more time and effort than practitioners are willing to invest, one cannot expect them to be used to reduce risk during development. Full-mission human-in-the-loop simulation is costly and time-consuming to apply and should be used only when the potential risks and opportunities justify it. Part-task simulation is a less costly alternative in which only the elements that bear critically on the questions to be answered are simulated. Human-system models range widely in their scope and in the effort required to apply them. While the keystroke-level model version of GOMS can be taught fairly quickly, other modeling approaches all appear to be more difficult to use than they should be, and more difficult than practitioners currently are willing to use routinely. Even IMPRINT, a well-developed and popular collection of models, is considered too difficult for the average practitioner to use. This may be inherent in the tools; it may be due to inadequate instructional materials or to inadequacies in the quality of the tools and environments that support model development and use. It may also result from a lack of education or experience about how valuable the investment in models can be—that the investment is worth the cost in time and effort. Few people now note how expensive it is to design and test a computer chip, a bridge, or a ship, or bemoan the knowledge required to perform these tasks. And yet humans and their interactions are even more complex; designing for and with them requires expertise, time, and support.
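The keystroke-level model sits at the easy end of this spectrum because it reduces to back-of-the-envelope arithmetic over a handful of operators. The sketch below uses the commonly cited KLM operator times (exact values vary slightly across sources), and the sample task is hypothetical.

```python
# A back-of-the-envelope keystroke-level model (KLM) estimate.
# Operator times are the commonly cited KLM values: K = keystroke,
# P = point with mouse, H = move hand between devices, M = mental
# preparation. (Exact values vary slightly across sources.)
KLM_SECONDS = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

def klm_time(operators):
    """Predicted execution time for an operator sequence like 'MHPKKK'."""
    return sum(KLM_SECONDS[op] for op in operators)

# Hypothetical task: mentally prepare, reach for the mouse, point at a
# field, then type a three-character code.
print(f"{klm_time('MHPKKK'):.2f} s")
```

That such an estimate can be computed in a few lines is exactly why KLM is teachable quickly, in contrast to the heavier architectures discussed above.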
Further work is needed to improve the usability of the model development process and the ease of use of the resulting models. For human-system models to be credible as shared representations, they must make their characteristics and predictions explicit in a way that can be understood by the range of stakeholders for whom they are relevant. People ask a range of questions about models, including what their structure is, how they “work,” and why they did or did not take a particular action (Councill, Haynes, and Ritter, 2003). This problem is more acute for the more complex models, particularly the information-processing models. Models that are unclear or opaque risk being ignored or going unused. Promoting the understanding of models will increase trust in them and in their identification of system risks. Future models will need to support explanations of their structure, their predictions, and the sources of those predictions.
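One concrete form such explanation support could take is a decision trace: the model records, at each step, which action it took and why the alternatives were rejected. The tiny rule interpreter below illustrates the idea; the rules and working-memory contents are invented, and real cognitive architectures are far richer than this sketch.

```python
# A sketch of a model that can answer "why did you (not) take that
# action?": a tiny rule interpreter that logs, on every cycle, which
# rule fired and which rules were skipped and why. The rules and
# working-memory contents are invented for illustration.

rules = [
    ("acknowledge-alarm",
     lambda wm: "alarm" in wm,
     lambda wm: (wm.discard("alarm"), wm.add("acknowledged"))),
    ("resume-monitoring",
     lambda wm: "acknowledged" in wm,
     lambda wm: wm.add("monitoring")),
]

def run(wm, cycles=2):
    """Run the rule set, keeping a human-readable trace of each decision."""
    trace = []
    for cycle in range(cycles):
        for name, condition, action in rules:
            if condition(wm):
                action(wm)
                trace.append(f"cycle {cycle}: fired {name}")
                break  # one rule fires per cycle
            trace.append(f"cycle {cycle}: skipped {name} (condition not met)")
    return trace

for line in run({"alarm"}):
    print(line)
```

The trace, rather than the model's internal state, is what stakeholders would inspect; making such traces routine and readable is part of the usability work the text calls for.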

How models are developed will shape how they are used in system design. Using models across the design process, from initial conception to test and evaluation, will require adapting their level of depth and completeness to each task. Right now, model developers are at times still struggling to build user models once, let alone to build them for reuse across designers and across design tasks. There have been several efforts to make models more easily used. For human behavior representations, these include Amadeus (e.g., Young, Green, and Simon, 1989), Apex (Freed et al., 2003), CogTool (John et al., 2004), Herbal (Cohen, Ritter, and Haynes, 2005), and G2A (St. Amant, Freed, and Ritter, 2005). At their best, these tools have offered, in limited cases, a 3- to 100-fold reduction in development time, demonstrating that progress can be made in ease of use. While promising, these tools are not yet complete enough to support a wide range of designs, interfaces, tasks, and analyses. For example, CogTool is useful and supports a full model-test-revise cycle for an interface; it cannot model problem solving or real-time interactive behavior, but it starts to illustrate what such an easy-to-use system would look like. Research programs sponsored by the U.K. Ministry of Defence (“Reducing the Cost of Acquiring Behaviours”) and by the U.S. Office of Naval Research (“Affordable Human Behavior Modeling”) have aimed to make models more affordable and are sources of further examples and systems in this area.

Integration. There are gaps in integrating user models within and across design phases, as well as in connecting them to the systems themselves.
As models get used in more steps of the design process, they will serve as boundary objects, representing shared understanding about users’ performance in the systems under evaluation: their goals, their capabilities to execute tasks, and their behavior. IMPRINT has often been used this way. Once models are widely used, there will be a need to integrate them to ensure that designers and workers at each stage are talking about the same user-system characteristics. The models might usefully be elaborated together, for example, starting with a GOMS model and moving to a human behavior representation model to exercise an interface. This kind of graceful elaboration has been started by several groups (Lebiere et al., 2002; Ritter et al., 2005, 2006; Urbas and Leuchter, 2005) but is certainly not yet routine. The models will also have to be more mutable, so that multiple views of their performance can be used by participants in different stages of the design process. Some designers will need high-level views and summaries of behavior and of the knowledge required by users to perform the task; others may need detailed time predictions and guidance on how those times can be improved.

It is especially valuable for models of users to interact with systems and their interfaces. Models that interact with systems are easiest for designers to apply, most general, and easiest to validate. Eventually this capability could allow a model’s performance to serve as an acceptance test, and it may lead to new approaches, such as visual inspection of operational mock-ups rather than extensive testing. Currently, connecting models to system or interface simulations is not routine. The military has shown that the high-level architecture connection approach can be successful when the software supporting the models and systems to be connected is open and available for inspection and modification. However, much commercial software is proprietary and not available for modification to support model interaction (Ritter et al., 2000). In the long term, we think that having human behavior representation models interact directly with unmodified or instrumented interface software will become the dominant approach, which can also include automatic testing with explicit models. Models that use SegMan represent steps toward this approach of automatic testing of unmodified interfaces (Ritter et al., 2006; St. Amant, Horton, and Ritter, 2004).

High-level languages. Currently, many models, particularly human behavior representation models, require detailed specifications. Creating these models for realistic tasks can be daunting. For example, there are at least 95 tasks to include in a university department web site design (Ritter, Freed, and Haskett, 2005). One way to reduce the risk that human behavior models will go unused is to provide a high-level language similar to that used in network models.
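One form such a higher-level language could take is a small declarative task vocabulary that a library expands into low-level operator sequences (here, KLM-style operators as discussed earlier). The vocabulary and expansions below are hypothetical, meant only to show the shape of the approach.

```python
# A sketch of a high-level task language: analysts compose tasks from
# a named library, and each entry expands into a low-level operator
# sequence (KLM-style here). The vocabulary and expansions below are
# hypothetical.

TASK_LIBRARY = {
    "click-button": "MHP K",      # prepare, home to mouse, point, press
    "type-field":   "MHP KKKKK",  # point at a field, type ~5 characters
    "read-dialog":  "M",
}

def compile_task(high_level_steps):
    """Expand a high-level task description into one operator string."""
    return "".join(TASK_LIBRARY[step].replace(" ", "")
                   for step in high_level_steps)

ops = compile_task(["read-dialog", "type-field", "click-button"])
print(ops)
```

Adding a new or modified task then means adding one library entry rather than hand-writing a detailed model, which is the ease-of-programming property the text asks for.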
Interface designers will need a textual or graphical language for creating models that is higher level than most current human behavior representation languages. Analysts will need libraries of tasks (or the ability to read models as if they were libraries), and it must be easy to program new and modified tasks. More complete lists of requirements for this approach are available (e.g., Kieras et al., 1995; Ritter, Van Rooy, and St. Amant, 2002).

Cultural, team, and emotional models. Models of individual task performance have rarely included social knowledge about working in groups or about cultural differences. Users are increasingly affected by social processes, including culture and emotion. As the role of these effects on systems becomes better understood, models will need to be extended to include what is known about them, as a further element of risk reduction. For a mundane but sometimes catastrophic example, consider the interpretation of switches in different cultures: some cultures flip switches up to turn them on, and some flip them down. The design and implementation

of safety-critical switches, such as aircraft circuit breakers or power plant controls, needs to take account of these cultural differences. Social knowledge, cultural knowledge, theories of emotion, and task knowledge have been developed by different communities; models of social processes will need to be adapted if they are to be incorporated into models of task execution, such as human behavior representation models (Hudlicka, 2002). Understanding and applying this knowledge to design is of increasing interest, driven by a desire to improve the quality of model performance and by an acknowledgment that cultural, team, and emotional effects influence one another and task performance. For example, there is a forthcoming National Academies study on organizational models (National Research Council, 2007), and there is recent work on including social knowledge in models of human behavior representation (e.g., Sun, 2006).