Trends In Human-Computer Interaction Research And Development
Virginia Polytechnic Institute
Human-computer interaction (HCI) is a field of research and development, methodology, theory, and practice with the objective of designing, constructing, and evaluating computer-based interactive systems (including hardware, software, input/output devices, displays, training, and documentation) so that people can use them efficiently, effectively, safely, and with satisfaction. HCI is cross-disciplinary in its conduct and multidisciplinary in its roots, drawing on (synthesizing and adapting from) several other fields, including human factors (e.g., the roots for task analysis and designing for human error in HCI); ergonomics (e.g., the roots for design of devices, workstations, and work environments); cognitive psychology (e.g., the roots for user modeling); behavioral psychology and psychometrics (e.g., the roots of user performance metrics); systems engineering (e.g., the roots for much predesign analysis); and computer science (e.g., the roots for graphical interfaces, software tools, and issues of software architecture).
The entire field of HCI shares the single goal of achieving high usability for users of computer-based systems. Rather than being fuzzy and vague as it is sometimes perceived, usability is tangible and can be quantified. Usability can be broadly defined as "ease of use," including such measurable attributes as learnability, speed of user task performance, user error rates, and subjective user satisfaction (Shneiderman, 1992; Hix and Hartson, 1993a). However, an easy-to-use system that does not support its users' needs, in terms of functionality, is of little value. Thus, usability has evolved toward the concept of "usability in the large," that is, ease of use plus usefulness.
Despite many research advances in interactive computer systems, usability barriers still obstruct access to, and blunt effectiveness of, an every-citizen interface for the national information infrastructure, disenfranchising and disenchanting users across society. As a result, the United States fails to accrue the potentially enormous returns of our collective investment in computing technology. These barriers impede human productivity and have a profound impact on computer users in business, government, industry, education, and indeed the whole nation.
In the not-too-distant past, computer usage was esoteric, conducted mostly by a core of technically oriented users who were not only willing to accept the challenge of overcoming poor usability but also sometimes welcomed it as a barrier to protect their craft from uninitiated "outsiders." Poor usability was good for the field's mystique, not to mention users' job security. Now, unprecedented numbers of Americans use computers, and the user interface is often the first thing people ask about when discussing software. To most users the interface is the system. For the "every citizen" of today, communication with the system has become at least as important as computation by the system.
The goals of most organizations include increased employee and organization productivity, decreased employee training costs, decreased employee work errors, and increased employee satisfaction. These are also exactly the benefits of achieving high usability in user interfaces. Too often, especially in government and large businesses, training is used as a costly substitute for usability, and almost as often it fails to meet its goals. Attention to usability by developers no longer requires justification in most quarters: "usability has become a competitive necessity for the commercial success of software" (Butler, 1996).
Achieving good usability requires attention to both product and process. The product, in this case, is the content of the user interaction design and its embodiment in software. An effective process for developing interaction design is also important, and a poor understanding of the process is often responsible for a product's lack of usability. While state-of-the-art user interaction development processes are based on formative usability evaluation in an iterative cycle, much of the state of the practice is fundamentally flawed in that remarkably little formal usability evaluation is performed on most interactive systems. This is generally changing now in many industrial settings. However, ensuring usability remains difficult when evaluation, because of real or perceived costs, is not standard practice in interactive software development projects.
Developers attempting to incorporate usability methods into their development environments often refer to their efforts in terms of "evaluating software" or "evaluating user interface software." There are many reasons for evaluating software, but usability is not one of them. Usability is seated within the design of the user interaction component of an interactive system, not in the user interface software component, as shown simplistically below:
Development of the user interface
  - Development of the user interaction component
  - Development of the user interface software component
Development of the interaction component, toward which most HCI effort is directed, is substantially different from development of the user interface software. The view of the user interaction component is the user's perspective of user interaction: how it works; how tasks are performed using it; and its look and feel and behavior in response to what a user sees, hears, and does while interacting with the computer.
In contrast, the user interface software component is the programming code by which the interaction component is implemented. The user interaction component design should serve as requirements for the user interface software component. Design of the user interaction component must be given attention at least equal to that given the user interface software component during the development process, if usability in interactive systems is to be ensured.
The overview of HCI topics, issues, and activities that follows is loosely divided into theory, interaction techniques, and development methods. Reflecting its diverse roots, HCI is host to activities in many topical areas, some of which are reviewed here. An attempt has been made to capture a broad, inclusive cross section of a very dynamic field, but this paper is not intended to be an exhaustive survey, and no claims are made for completeness. Emphasis is given to topics of most importance to the usability of an every-citizen interface.
HCI theory has its avid proponents. If the proportion of literature devoted to theory is to be taken as an indication, theory plays a strong
role in HCI, but in fact theory has not seen broad, direct application in the practice of HCI.
Much theory comes to HCI from cognitive psychology (Hammond et al., 1987; Barnard, 1993). Norman's (1986) theory of action expresses, from a cognitive engineering perspective, human task performance: the path from goals to intentions to actions (inputs to the computer), back through perception and interpretation of feedback, to evaluation of whether the intentions and goals were approached or met. The study of learning in HCI (Carroll, 1984; Draper and Barton, 1993) and Fitts's law (relating cursor travel time to the distance to and size of a target) (MacKenzie, 1992) also have their roots in cognitive theory.
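The predictive character of Fitts's law can be sketched in a few lines. This uses the Shannon formulation discussed by MacKenzie (1992); the intercept and slope coefficients are device-specific constants obtained by regression, and the values below are purely illustrative, not taken from any particular study.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under Fitts's law.

    Shannon formulation: MT = a + b * log2(D/W + 1).
    The constants a (intercept) and b (slope) are fitted per device;
    the defaults here are illustrative placeholders only.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# A distant, small target takes longer to acquire than a near, large one.
far_small = fitts_mt(distance=800, width=10)   # index of difficulty ~6.3 bits
near_large = fitts_mt(distance=100, width=50)  # index of difficulty ~1.6 bits
```

In design terms, the law rewards making frequent targets larger or closer, which is one reason screen-edge menus (effectively infinite width along one axis) are fast to acquire.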
To design a user interface (or any system) to meet the needs of its users, developers must understand what tasks users will use a system for and how those tasks will be performed (Diaper, 1989). Because tasks at all but the highest levels of abstraction involve manipulation of user interface objects (e.g., icons, menus, buttons, dialogue boxes), tasks and objects must be considered together in design (Carroll et al., 1991). A complete description of tasks in the context of their objects is a rather complete representation of an interaction design. The process of describing tasks (how users do things) and their relationships (usually in a hierarchical structure of tasks and subtasks) is called task analysis and comes to HCI primarily from human factors (Meister, 1985). There are various task analysis methods to address various purposes. In HCI the primary uses are to drive design and to build predictive models of user task performance. Because designing for usability means understanding user tasks, task analysis is essential for good design; unfortunately, it is often ignored or given only minimal attention.
A significant legacy from cognitive psychology is the model of a human as a cognitive information processor (Card et al., 1983). The Command Language Grammar (Moran, 1981) and the keystroke model (Card and Moran, 1980), which attempt to explain the nature and structure of human-computer interaction, led directly to the Goals, Operators, Methods, and Selection (GOMS) model (Card et al., 1983). GOMS-related models (quantitative models combining task analysis and the human user as an information processor) are concerned with predicting various measures of user performance, most commonly task completion time based on physical and cognitive actions of users, with placeholders and estimated times
for highly complex cognitive actions and tasks. Direct derivatives of GOMS include NGOMSL (Kieras, 1988) and Cognitive Complexity Theory (CCT) (Kieras and Polson, 1985; Lewis et al., 1990), the latter of which is intended to represent the complexity of user interaction from the user's perspective. This technique represents an interface as the mapping between the user's job-task environment and the interaction device behavior.
GOMS-related techniques have been shown to be useful in discovering certain kinds of usability problems early in the life cycle, even before a prototype has been constructed. Some studies (e.g., Gray et al., 1990) have demonstrated a payoff in a few circumscribed applications where the savings of a small number of user actions (e.g., a few keystrokes or mouse movements) can improve user performance enough to have an economic impact, often because of the repetitiveness of a task.
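The flavor of such a prediction can be shown with a minimal keystroke-level-model sketch in the spirit of Card, Moran, and Newell. The operator times below are the commonly cited textbook estimates in seconds, not calibrated values, and the example task decomposition is invented for illustration.

```python
# Textbook keystroke-level model (KLM) operator estimates, in seconds.
# Real analyses calibrate these per user population and device.
KLM_OPERATORS = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point with the mouse to a target
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
}

def predict_time(operator_sequence):
    """Sum operator times to predict expert, error-free task time."""
    return sum(KLM_OPERATORS[op] for op in operator_sequence)

# Hypothetical task: think, point at an icon, click, think,
# point at a menu item, click.
menu_method = ["M", "P", "B", "M", "P", "B"]
estimate = predict_time(menu_method)  # ~5.1 seconds
```

Comparing such estimates across two candidate designs is exactly the kind of analysis that paid off in the repetitive-task cases cited above, where saving a few operators per transaction compounds across thousands of daily repetitions.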
Nonetheless, these models have not achieved widespread application within the tight constraints of industrial schedules and budgets because of the labor intensiveness of producing and maintaining these relatively formal and structured task representations, the need for specialized skills, and the difficulty in competing with the effectiveness of usability evaluation using a prototype. Furthermore, these techniques generally do not take into account individual differences in user classes and are often limited to expert, error-free behaviors (not representative of "every citizen" as a user). In any case, it is generally agreed that this kind of analytical approach to usability evaluation cannot be considered a substitute for empirical formative evaluation: usability testing of a prototype with users in a lab or field setting (see "User-Based Evaluation" below).
Another area feeding HCI theory and practice is "work activity theory" (Ehn, 1990; Bodker, 1991). Originating in Russia and Germany and now flourishing in Scandinavia (where it is, interestingly, related to the labor movement), this view of design based on work practices situated in a worker's own complete environment has been synthesized into several related mainstream HCI topics. For example, "participatory design" is a democratic process based on the argument that users should be involved in designs they will be using, in which all stakeholders, including and especially users, have equal inputs into interaction design. Muller (1991) and others have operationalized participatory design in an approach called PICTIVE, which supports rapid group prototype design using Post-It(tm) notes, marking pens, paper, and other "low-technology" materials on a large table top.
This interest in design driven by work practices in context has led to the eclectic inclusion in some HCI practice of ethnography, an investigative
field rooted in anthropology (LeCompte and Preissle, 1993), and other hermeneutic (concerned with ways to explain, translate, and interpret perceived reality) approaches as qualitative research tools for extracting design requirements. Contextual inquiry/design (Wixon et al., 1990) is an example of an adaptation of this kind of approach, where design and evaluation are conducted collaboratively by users and developers, while users perform normal work tasks in their natural work environment. Much of this collaboration is based on interviews that seek to make implicit work practices more explicit and to draw out structure, language, and culture affecting the work.
The task artifact framework of Carroll and Rosson (1992) and, to some extent, scenario-based design follow an ethnographic focus on task performance in a work context. Scenarios are concrete, narrative descriptions of user and system activity for task performance (Carroll, 1995). They describe particular interactions happening over time, being deliberately informal, open ended, and fragmentary. Scenarios often focus on interaction objects, or artifacts, and how they are manipulated by users in the course of task performance.
While not theory per se, formal methods have been the object of some interest and attention in HCI (Harrison and Thimbleby, 1990). The objectives of formal methods (precise, well-defined notations and mathematical models) in HCI are similar to those in software engineering. Formal design specifications can be reasoned about and analyzed for various properties such as correctness and consistency. Formal specifications also have the potential to be translated automatically into prototypes or software implementation. Thus, in principle, formal methods can be used to support both theory and practice; however, they have not yet had an impact in real-world system development, and their potential is difficult to predict.
Devices, Interaction Techniques, And Graphics
In contrast to theory, the influence of interaction devices and their associated interaction techniques represents a practical arena of real-world constraints as well as hardware design challenges. "An interaction technique is a way of using a physical input/output device to perform a generic task in a human-computer dialogue" (Foley et al., 1990). A very similar term, interaction style, has evolved to denote the behavior of a user and an interaction object (e.g., a push button or pulldown menu) within the context of task performance. In practice, the notion of an interaction
technique includes the concept of interaction style plus full consideration of internal machine behavior and software aspects. In the context of an interaction technique, an interaction object (and its supporting software) is often referred to as a "widget." Libraries of widgets (software that supports programming of graphical user interfaces, or GUIs) are an outgrowth of operating system device handler routines used to process user input and output in the now-ancient and impoverished interaction style of line-oriented, character-cell, text-only, "glass teletype" terminal interaction. At first, graphics packages took interaction beyond text to direct manipulation of graphical objects, eventually leading to new concepts in displays and cursor tracking. Of course, the invention of the mouse and the advent of the Xerox Star and Apple's Lisa and Macintosh accelerated the evolution of the now-familiar point-and-click interaction styles. It is not surprising that many of the computer scientists who developed early graphics packages also introduced GUI interaction techniques as part of their contribution to the HCI field (Foley and Wallace, 1974; Foley et al., 1990). To some extent, standardization of interactive graphical interaction techniques led to the widgets of today's GUI platforms and corresponding style guides intended for ensuring compliance with a style but sometimes mistakenly thought of as usability guides.
This growth of graphics and devices made possible one of the major breakthroughs in interaction styles, direct manipulation (Shneiderman, 1983; Hutchins et al., 1986; Weller and Hartson, 1992), changing the basic paradigm of interaction with computers. Unlike previous command-line-oriented interaction in which users plan tasks in terms of hierarchies of goals and subgoals, entering a command line for each, direct manipulation allows opportunistic and incremental task planning. Users can try something and see what happens, exploring many avenues for interactive problem solving. This kind of opportunistic interaction is also called display-based interaction (Payne, 1991).
Development Methods And Software Engineering
The difference between user interaction and user interface software, mentioned in the Introduction, results in a need for separate and fundamentally different development processes for the two components of a user interface.
Studies deriving principles for user interaction development (e.g., Gould et al., 1991) vary, but all agree that interaction development must
involve usability evaluation. Just adding some kind of "user testing" to an existing software process is not enough, however. Usability comes from a complete process, one that ensures usability and attests to when it has been achieved (Hix and Hartson, 1993a). Most researchers and practitioners also agree that an interaction development process must be iterative, unlike the phase-oriented "waterfall" method, for example, for software development. Although software can be correctness driven, user interaction design-because of infinite design possibilities and unpredictable, dynamic, and psychological aspects of the human user-must be self-correcting. Thus, interaction development is an essentially iterative process of design and evaluation, one that must, in the end, be integrated with other system and software life cycles. Within this cycle, the interaction design is an iteratively evolving design specification for the user interface software. The star life cycle (Hartson and Hix, 1989) for interaction development explicitly acknowledges these differences from software development, being unequivocally iterative, and allows the process to start with essentially any development activity and proceed to any other activity before the previous one is completed, with each activity informing the others.
Design is closely coupled to, and driven by, early systems analysis activities such as needs, task, and functional analyses. Good interaction design involves early and continual involvement of representative users and is guided by well-established design guidelines and principles built on the concept of user-centered design (Norman and Draper, 1986). Design guidelines address such issues as consistency, use of real-world metaphors, human memory limits, screen layout, and designing for user errors. Additionally, designers are expected to follow style guides (less oriented toward usability than toward compliance with some "standard" style) in their use of widgets.
Although some more recent guidelines enjoy the support of empirical studies, guidelines have typically been scattered throughout the literature, based mostly on experience and educated opinion. In a classic work, Smith and Mosier (1986) compiled guidelines for character-cell, textual interface design. Others (Mayhew, 1992; Shneiderman, 1992) have followed to help cover graphical interfaces.
Many practitioners believe it is enough to know and use interface design guidelines, possibly in addition to an interface style guide (e.g., for Windows). Experience, however, has shown that guidelines and style
guides do not eliminate the need for usability evaluation. Experience has also demonstrated that, although guidelines are not difficult to learn as factual knowledge, their effective application in real design situations is a skill acquired only through long experience.
The creative act of design must also be accompanied by the physical act of capturing and documenting that design. Although many constructional techniques exist for representing software aspects of interface objects, behavioral representation techniques are needed for communicating, among developers, the interaction design from a behavioral task and user perspective. The User Action Notation (UAN) is one such technique (Hartson et al., 1990; Hartson and Gray, 1992). The UAN is a user- and task-oriented notation that describes the behavior of a user and an interface during their cooperative performance of a task. The primary abstraction of the UAN is a user task-a user action or group of temporally related user actions performed to achieve a work goal. A user interaction design is represented as a quasi-hierarchical structure of asynchronous tasks. User actions, interface feedback, and internal state information are represented at various levels of abstraction in the UAN. In addition to design representation, design rationale (MacLean et al., 1991) is captured to record and communicate the history and basis for design decisions, to reason about designs, and to explore alternatives.
Rapid prototypes of interaction design are early and inexpensive vehicles for evaluation that can be used to identify usability problems in an interaction design before resources are committed to implementing the design in software. Much interest has been focused on low-fidelity prototypes (e.g., paper and pencil). Counter to intuition, low-fidelity prototypes have allowed developers to discover as many usability problems as found using interactive computer-based prototypes (Virzi et al., 1996). Paper prototypes are most useful early in the life cycle because they are more flexible in exploring variations of interaction behavior at a cost of less fidelity in appearance. Later in the life cycle, changes made to the behavior of a coded prototype are more expensive than changes made in appearance. Almost all projects eventually move to computer-based rapid prototypes for formal usability evaluation.
Summative evaluation is used to make judgments about a finished product, to gauge the level of usability achieved and possibly compare one system with another. In contrast, formative evaluation, the heart of the star life cycle, is used to detect and fix usability problems before the interaction design is coded in software (Hix and Hartson, 1993a,b; Nielsen, 1993), aiding in the improvement of an interaction design while a product is still being developed. For formative evaluation, unlike summative evaluation, statistical significance is not an issue. Formative evaluation relies on both quantitative and qualitative data. The quantitative data are used as a gauge for the process: to be sure usability is improving with each design iteration and to know when to stop iterating. Borrowing an adage from software engineering (and probably other places before that), "if you can't measure it, you can't manage it." The instruments used to quantify usability include benchmark tasks and user questionnaires. Benchmark tasks, drawn from representative and mission-critical tasks, yield objective user performance data, such as time on task and error rates (Whiteside et al., 1988). Questionnaires yield subjective data such as user satisfaction (Chin et al., 1988). In analyzing quantitative data, results are compared against preestablished usability specifications (Whiteside et al., 1988), operationally defined and measurable goals used as criteria for success in interaction design.
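A usability specification in the style of Whiteside et al. (1988) can be sketched as a simple table of planned target and worst-acceptable levels per benchmark task, against which observed data are compared each iteration. The task names, time limits, and measurements below are invented for illustration.

```python
from statistics import mean

# Hypothetical usability specifications: target and worst-acceptable
# mean time on task, in seconds, for two benchmark tasks.
specs = {
    "add calendar appointment": {"target": 60, "worst_acceptable": 90},
    "find free meeting slot":   {"target": 45, "worst_acceptable": 75},
}

# Hypothetical times on task observed for five users in one iteration.
observed = {
    "add calendar appointment": [55, 72, 64, 81, 58],
    "find free meeting slot":   [95, 88, 102, 79, 91],
}

def evaluate(specs, observed):
    """Compare mean time on task against the specification levels."""
    results = {}
    for task, spec in specs.items():
        m = mean(observed[task])
        if m <= spec["target"]:
            results[task] = "meets target"
        elif m <= spec["worst_acceptable"]:
            results[task] = "acceptable; keep iterating"
        else:
            results[task] = "fails; redesign needed"
    return results
```

The same comparison, run after each design iteration, gives management the objective stopping criterion described above: iteration ends when all benchmark tasks meet their target levels.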
Even more valuable than these quantitative data are the qualitative data gathered in usability evaluations. Identification of critical incidents (occurrences in task performance that indicate a usability problem) is essential in pinpointing design problems. Verbal protocol (capturing users' thinking aloud) helps designers understand what was going through a user's mind when a usability problem occurred, which may help in ascertaining its causes and in offering useful solutions.
These quantitative and qualitative data typically come from lab-based evaluations involving users as "subjects." While very effective, this process can be expensive. The need for faster, less costly usability methods has led to approaches, such as discount usability engineering (Nielsen, 1989), that trade off less-than-perfect and complete results for a lower cost. Inspection methods (Nielsen and Mack, 1994) use systematic examinations of design representations, prototypes, or software products. Cognitive walkthroughs (Lewis et al., 1990; Wharton et al., 1992) and claims analysis (Carroll and Rosson, 1992) are effective inspection methods, especially early in development, but can still be labor intensive and require special training, which is intimidating to developers in search of cost-effective methods. Heuristic evaluation (Nielsen and Molich, 1990; Nielsen, 1992), which involves reviewing compliance of an interaction design to a checklist of selected and generalized guidelines, is an even less expensive inspection method but is limited by the scope of guidelines used.
Inspection methods are effective at finding some kinds of usability problems but do not reliably pinpoint all types of problems that can be
observed in lab-based testing. In fact, lab-based usability evaluation remains the yardstick against which most new methods are compared in formal studies. Most real-world development organizations continue to be willing to pay the price for extensive lab-based usability evaluation because of its effectiveness in helping them identify and understand usability problems, their causes, and solutions.
Many HCI practices, such as the employment of usability specifications and various kinds of evaluation, have been gathered under the banner of usability engineering (Nielsen, 1993). This is a good appellation because it includes a concern for cost in the notion of discount usability methods (Nielsen, 1989), the practical goal of achieving specifications and not perfection, and techniques for managing the process. The latter is important because iterative processes are sometimes perceived by management as "going around in circles," which is not attractive to a manager with a limited budget and dwindling production schedule.
Usability specifications provide this essential management control for the iterative process. The quantitative usability data are analyzed in each iteration, and the results are compared with the usability specifications, allowing management to decide if iteration can stop. If the specifications are not met, data are assessed to weigh cost and severity or importance of each usability problem, assigning a priority ranking for designing and implementing solutions to those problems that, when fixed, will give the largest improvement in usability for the least cost.
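One simple way to operationalize this cost-importance ranking is to order problems by estimated usability gain per unit of fix cost. The problem names, severity scores, and cost estimates below are invented for illustration; real teams estimate them per problem from evaluation data and engineering judgment.

```python
# Hypothetical usability problems from a formative evaluation, each
# with an estimated severity (usability impact, 1-10) and fix cost
# (person-days). All numbers are illustrative.
problems = [
    {"name": "unlabeled toolbar icons", "severity": 8, "fix_cost": 2},
    {"name": "deep menu nesting",       "severity": 6, "fix_cost": 5},
    {"name": "confusing error wording", "severity": 9, "fix_cost": 1},
    {"name": "slow search results",     "severity": 4, "fix_cost": 8},
]

# Fix first the problems yielding the largest usability improvement
# per unit cost.
ranked = sorted(problems,
                key=lambda p: p["severity"] / p["fix_cost"],
                reverse=True)
```

Under this ordering a cheap, high-severity fix (here, rewording a confusing error message) rises to the top, while an expensive, low-severity one falls to the bottom, matching the largest-improvement-for-least-cost rule described above.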
Almost any software package that provides support for the interface development process can be called an interface development tool, a generic term referring to anything from a complete interface development environment to a single library routine (Myers, 1989, 1993). New software tools for user interface development are appearing with increasing frequency.
Interface development tools can be divided into at least four types (Hix and Hartson, 1993a). Toolkits are libraries of callable routines for low-level interface features and are often integrated with window managers (e.g., X, Windows). Interface style support tools are interactive systems that enforce a particular interface style and/or standards (e.g., OSF Motif, Common User Access). User interface management systems (UIMSs) are development environments that can include both prototyping and run-time support, with the goal of allowing developers to produce an interface
implementation without using traditional programming languages. Of these groups, the UIMSs perhaps are the most interesting, have the most potential, and suffer the most difficult technical problems (Myers, 1995).
These first three categories of development tools primarily address user interface software. A fourth category, interaction development tools, provides interactive support for user interaction development. Of all the interaction development activities, the one most commonly supported by tools in this group is formative evaluation (Macleod and Bevan, 1993; Hix and Hartson, 1994).
Although tools now exist on many programming platforms to lay out objects of a user interface quickly and easily, usability problems are not necessarily addressed by adding this kind of technology to the process; many interface development tools are potentially a faster way to produce poor interfaces.
Economic justification for usability effort in interactive system development is now beginning to be established (Bias and Mayhew, 1994). Broad acceptance in business and industry requires further demonstration of a return on investment; documented cases and success stories are essential. The bottom line is that usability engineering does not add to overall cost, for two reasons: (1) usability does not add as much cost to the development process as many people think, and (2) good usability saves many other costs.
Considering cost added to the process, one must realize that any added cost is confined. Interaction development is a small part of total system development. It occurs early in the process, when the cost of making changes is still relatively low, and mainly impacts only a prototype, not the final system software.
Considering the cost savings attributable to good usability, it is easy to establish that poor usability is costly and that good usability is all about lowering costs. Usability is simply good business. The most expensive operational item in an interactive system is the user. People who develop software are concerned with the cost of development, but the people who buy and use a software application are concerned with the costs of usage. Development costs are mostly one-time costs, while operational costs, such as training, productivity losses, help desks and field support, recovery from user errors, dissatisfied employees, and system maintenance (the cost of trying to fix problems after release), accrue for years.
The problem with cost-benefit analysis, unless the net of analysis is cast broadly enough, is that one group pays development costs and another group gets the benefits. People who purchase computer systems are asking which costs more: user-based tasks that are quick, efficient, and accurate or error-prone tasks that take more time? Confused users or confident and competent users?
Beyond this kind of argumentation, used in software engineering for years, substantial measurable economic advantage can be accrued from usability. Case studies have demonstrated that large sums of real money can be saved by increasing user (employee) productivity alone (Bias and Mayhew, 1994). In the end, these are the cases that will make the difference.
HCI is a relatively young and broadly diverse field with a rapidly growing impact on the world of computing. Usability, especially in every-citizen interfaces, is becoming recognized as crucial for the national information infrastructure. The future of HCI in this context can be viewed from a perspective of product and process.
A rich part of the future of HCI is in its application areas, which are growing more rapidly than the HCI methods needed for their development. As an example, it is unlikely that usability methods developed for desktop applications will apply directly to virtual environments, one of the most exciting areas of applications development. Despite intense and widespread research in virtual environments, very little work has been applied toward developing the usability methods that will be required to evaluate this new technology-a necessary coupling if virtual environments are to reach their full potential. Similarly, groupware and computer-supported cooperative work (Baecker, 1993; Grudin, 1994), multimedia (Blattner and Dannenberg, 1992), hypermedia, and interface access for disabled or impaired persons (Williges and Williges, 1995) will require development of new methods for design and usability evaluation. Educational technology for the classroom, the World Wide Web, and the home is emerging as a giant application area. Perhaps nowhere is usability more important than in the discipline of education, where understanding and communication of concepts and ideas are the stock in trade.
Finally, the Internet, the World Wide Web, and cyberspace are incredibly fast-growing application domains bringing new kinds of usability challenges. The World Wide Web is a technological and sociological frontier with many analogies to the frontier that was the American West over a century ago-lawlessness and disorganization, with exploration and expansion in every direction.
Studies show that users having trouble with an interactive system often
cannot find solutions in user manuals or from on-line help; they are more likely to ask a friend, colleague, or co-worker for help. This strategy can work in a local setting where there are other users. However, users of the national information infrastructure will often be remote and distributed, using a network as their work setting. These isolated users are less able to tolerate poor user interfaces and will abandon applications they find too difficult to learn and use; often there is no one for them to ask when things go wrong at the computer, so usability will have a large impact on their productivity and satisfaction. For this large-scale environment, with its diversity of user types and characteristics, its variety of application types, and its potential for user isolation, usability takes on special importance.
Additionally, the interaction styles and techniques of future products can be expected to expand beyond the currently ubiquitous WIMP-windows, icons, menus, pointers-or desktop-style interface. While WIMP interfaces have provided a great step forward for interfaces in static situations (e.g., word processing, spreadsheets), innovative interaction techniques that go beyond the WIMP paradigm are necessary to meet user interface needs of demanding, real-time, high-performance applications such as those found in military applications, medical systems, "smart road" applications, and so on. Researchers are promoting a greatly expanded vision of interaction beyond the limited interaction styles now available via just keyboard and mouse, including extensions to current work in graphic and visual displays (Mullet and Sano, 1995), use of hands and feet (Buxton, 1986), eye movement (Jacob, 1993), haptic (touch) feel and force feedback (Baecker et al., 1995), audio and sound (Brewster et al., 1993; Gaver and Smith, 1995), voice (McCauley, 1984), and stylus and gesture (Goldberg and Goodisman, 1995).
Finally, many technology forecasters have predicted that the most significant area of future applications may be computing embedded in appliances, homes, offices, vehicles, and roads. Sometimes called wearable computers, these devices can be strapped to one's wrist or embedded in a shoe! A recent television news feature (CNN News, July 1996) described a project at Massachusetts Institute of Technology in which a pair of shoes will, indeed, be instrumented so that, as the wearer gets milk out for breakfast, sensors will note that the milk supply is getting low! Approaching the grocery store on the way home, the system speaks via a tiny earphone to remind the shoe's wearer of the need to pick up some milk.
The usability requirements of desktop and other familiar systems will pale in comparison to those of this new era of computing. The fact that "every citizen" will not tolerate training courses, user manuals, or on-line help to operate everyday objects such as refrigerators and automobiles will compel designers to take seriously their responsibility
for usability. Issues of social impact carry high risks if this kind of every-citizen interface is threatening, intimidating, or difficult to use. In successful designs the computing component will be transparent, with users not even thinking of themselves as users of computers. When human factors was first adapted to user interfaces (e.g., Williges et al., 1987), ergonomics was largely filtered out. Interestingly, new devices, combining hardware, software, and firmware as "appliances," will require a reintegration of ergonomics as a part of usability.
Developers of future HCI processes will struggle to keep pace with these new application areas and interaction styles. One area that is already changing among real-world system developers is the representation of roles and skills in interactive system development teams. Usability specialists, human factors engineers, and HCI practitioners are starting to take their long-overdue places alongside systems analysts and software engineers. These new roles imply the need for new kinds of training in HCI methods. These roles have already begun to be joined by those with technical writing and documentation skills and especially by those with graphics and visual design skills (Tufte, 1983; Mullet and Sano, 1995)-for example, to use color effectively (Shubin et al., 1996) and to design icons, avatars, and rendered images.
A significant share of future HCI activity is also expected to go into developing new methods. There is an ongoing need for new high-impact usability evaluation methods. High impact means cost effective, applicable to a wide variety of application types (e.g., World Wide Web applications), applicable to many new interaction styles (e.g., virtual environments), and suitable for gathering usability data from remote and distributed user communities.
Among the approaches to remote evaluation emerging now, most are either limited to subjective user feedback (Abelow, 1993) or require expensive bandwidth to support video conferencing as an extension of the usability lab (Hammontree et al., 1994). A method based on user-assisted critical incident gathering (Hartson et al., 1996) has been proposed to bypass the bandwidth requirements for full-time video transmission and to cut analysis costs.
Methods and software support tools are also in demand for boosting return on investment of resources committed to usability evaluation. Koenemann-Belliveau et al. (1994, p. 250) have articulated this need: "We should also investigate the potential for more efficiently leveraging the work we do in empirical formative evaluation-ways to 'save' something from our efforts for application in subsequent evaluation work." Most of
the time results from usability evaluation are applied only to specific usability problems in a single design. Database tools for information management of the results would accrue immediate gains in effective usability problem reporting (Jeffries, 1994; Pernice and Butler, 1995). More significantly, a usability database tool would afford some "memory" to the process, amortizing, through reuse of analysis, the cost of results across design iterations and across multiple products and projects. Beyond organizational boundaries, a collective usability database could serve as a commonly accessible repository of a science base for the HCI community and as a practical knowledge base for exemplar usability problems, solutions, and costs.
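The results-management idea described above can be sketched as a small record type plus keyword retrieval. This is a minimal illustration only; every name and field here is hypothetical and not drawn from any actual tool discussed in the text.

```python
from dataclasses import dataclass

# Hypothetical sketch of a usability-problem record that a database tool
# for evaluation results might store; field names are illustrative only.
@dataclass
class UsabilityProblem:
    description: str            # what the user encountered
    task: str                   # task context in which it occurred
    severity: int               # e.g., 1 (cosmetic) to 4 (blocks task)
    proposed_fix: str = ""      # candidate design change, if known
    fix_cost_hours: float = 0.0 # estimated cost of the fix
    tags: tuple = ()            # keywords enabling reuse across projects

def find_related(problems, keyword):
    """Retrieve previously analyzed problems matching a keyword, so that
    earlier analysis can be reused in later design iterations."""
    kw = keyword.lower()
    return [p for p in problems
            if kw in p.description.lower()
            or kw in (t.lower() for t in p.tags)]

# Example: two problems recorded during an evaluation session.
db = [
    UsabilityProblem("Icon meaning unclear", "file transfer", 2,
                     tags=("icons", "labeling")),
    UsabilityProblem("Error message gives no recovery advice",
                     "form submission", 3, tags=("errors", "feedback")),
]
print([p.description for p in find_related(db, "icons")])
# → ['Icon meaning unclear']
```

A shared store of such records is one plausible way the "memory" across design iterations and projects described above could be realized.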
The future of HCI is both exciting and challenging. As the field moves beyond GUIs and develops new methods, the problems to be solved continue to grow. But the promise of these new products and processes will come to fruition in an every-citizen interface for the national information infrastructure.
Many thanks to Dr. Deborah Hix, of Virginia Tech, for her help in providing inputs and in reading this paper.
Abelow, D. (1993). Automating Feedback on Software Product Use. CASE Trends, 15-17.
Baecker, R. M. (Ed.). (1993). Readings in Groupware and Computer-Supported Cooperative Work: Assisting Human-Human Collaboration. San Francisco: Morgan-Kaufmann.
Baecker, R. M., Grudin, J., Buxton, W. A. S., and Greenberg, S. (1995). Touch, Gesture, and Marking. In R. M. Baecker, J. Grudin, W. A. S. Buxton, and S. Greenberg (Eds.), Readings in Human-Computer Interaction: Toward the Year 2000. San Francisco: Morgan-Kaufmann, 469-482.
Barnard, P. (1993). The Contributions of Applied Cognitive Psychology to the Study of Human-Computer Interaction. In R. M. Baecker, J. Grudin, W. A. S. Buxton, and S. Greenberg (Eds.), Readings in Human-Computer Interaction: Toward the Year 2000. San Francisco: Morgan-Kaufmann, 640-658.
Bias, R. G. and Mayhew, D. J. (Eds.). (1994). Cost Justifying Usability. Boston: Academic Press.
Blattner, M. M. and Dannenberg, R. B. (Eds.). (1992). Multimedia Interface Design. New York: ACM Press.
Bodker, S. (1991). Through the Interface: A Human Activity Approach to User Interface Design. Hillsdale, NJ: Lawrence Erlbaum Associates.
Brewster, S. A., Wright, P. C., and Edwards, A. D. N. (1993). An Evaluation of Earcons for Use in Auditory Human-Computer Interfaces. Proceedings of INTERCHI Conference on Human Factors in Computing Systems. New York: ACM Press, 222-227.
Butler, K. A. (1996). Usability Engineering Turns 10. Interactions (January), 58-75.
Buxton, W. (1986). There's More to Interaction than Meets the Eye: Some Issues in Manual Input. In D. A. Norman and S. W. Draper (Eds.), User Centered System Design. Hillsdale, NJ: Lawrence Erlbaum Associates, 319-337.
Card, S. K. and Moran, T. P. (1980). The Keystroke-Level Model for User Performance Time with Interactive Systems. Communications of the ACM, 23, 396-410.
Card, S. K., Moran, T. P., and Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Carroll, J. M. (1984). Minimalist Design for Active Users. Proceedings of Human-Computer Interaction-Interact '84, September, Amsterdam: North-Holland, 39-44.
Carroll, J. M. (Ed.). (1995). Scenario-Based Design: Envisioning Work and Technology in System Development. New York: John Wiley and Sons, Inc.
Carroll, J. M., Kellogg, W. A., and Rosson, M. B. (1991). The Task-Artifact Cycle. In J. M. Carroll (Ed.), Designing Interaction: Psychology at the Human-Computer Interface. Cambridge, England: Cambridge University Press, 74-102.
Carroll, J. M. and Rosson, M. B. (1992). Getting Around the Task-Artifact Cycle: How to Make Claims and Design by Scenario. ACM Transactions on Information Systems, 10, 181-212.
Chin, J. P., Diehl, V. A., and Norman, K. L. (1988). Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface. Proceedings of CHI Conference on Human Factors in Computing Systems, May 15-19, New York: ACM, 213-218.
Diaper, D. (Ed.). (1989). Task Analysis for Human-Computer Interaction. Chichester, England: Ellis Horwood Limited.
Draper, S. W. and Barton, S. B. (1993). Learning by Exploration and Affordance Bugs. Proceedings of INTERCHI Conference on Human Factors in Computing Systems (Adjunct). New York: ACM, 75-76.
Ehn, P. (1990). Work Oriented Design of Computer Artifacts (2nd Ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Foley, J. D. and Wallace, V. L. (1974). The Art of Natural Graphic Man-Machine Conversation. Proceedings of the IEEE, 63(4), 462-471.
Foley, J. D., van Dam, A., Feiner, S. K., and Hughes, J. F. (1990). Computer Graphics: Principles and Practice. Reading, MA: Addison-Wesley.
Gaver, W. W. and Smith, R. B. (1995). Auditory Icons in Large-Scale Collaborative Environments. In R. M. Baecker, J. Grudin, W. A. S. Buxton, and S. Greenberg (Eds.), Readings in Human-Computer Interaction: Toward the Year 2000. San Francisco: Morgan-Kaufmann, 564-569.
Goldberg, D. and Goodisman, A. (1995). Stylus User Interfaces for Manipulating Text. In R. M. Baecker, J. Grudin, W. A. S. Buxton, and S. Greenberg (Eds.), Readings in Human-Computer Interaction: Toward the Year 2000. San Francisco: Morgan-Kaufmann, 500-508.
Gould, J. D., Boies, S. J., and Lewis, C. (1991). Making Usable, Useful, Productivity-Enhancing Computer Applications. Communications of the ACM, 34(1), 74-85.
Gray, W. D., John, B. E., Stuart, R., Lawrence, D., and Atwood, M. (1990). GOMS Meets the Phone Company: Analytic Modeling Applied to Real-World Problems. Proceedings of INTERACT '90-Third IFIP Conference on Human-Computer Interaction, August 27-31, Amsterdam: North-Holland Elsevier Science Publishers.
Grudin, J. (1994). Groupware and Social Dynamics: Eight Challenges for Developers. Communications of the ACM, 37(1), 92-105.
Hammond, N., Gardiner, M. M., Christie, B., and Marshall, C. (1987). The Role of Cognitive Psychology in User-Interface Design. In M. M. Gardiner and B. Christie (Eds.), Applying Cognitive Psychology to User-Interface Design. Chichester: Wiley, 13-53.
Hammontree, M., Weiler, P., and Nayak, N. (1994). Remote Usability Testing. Interactions (July), 21-25.
Harrison, M. and Thimbleby, H. (Eds.). (1990). Formal Methods in Human-Computer Interaction. Cambridge, England: Cambridge University Press.
Hartson, H. R. and Gray, P. D. (1992). Temporal Aspects of Tasks in the User Action Notation. Human-Computer Interaction, 7, 1-45.
Hartson, H. R. and Hix, D. (1989). Toward Empirically Derived Methodologies and Tools for Human-Computer Interface Development. International Journal of Man-Machine Studies, 31, 477-494.
Hartson, H. R., Siochi, A. C., and Hix, D. (1990). The UAN: A User-Oriented Representation for Direct Manipulation Interface Designs. ACM Transactions on Information Systems, 8(3), 181-203.
Hartson, H. R., Castillo, J. C., Kelso, J., Kamler, J., and Neale, W. C. (1996). Remote Evaluation: The Network as an Extension of the Usability Laboratory. Proceedings of CHI Conference on Human Factors in Computing Systems. New York: ACM, 228-235.
Hix, D. and Hartson, H. R. (1993a). Developing User Interfaces: Ensuring Usability Through Product and Process. New York: John Wiley and Sons.
Hix, D. and Hartson, H. R. (1993b). Formative Evaluation: Ensuring Usability in User Interfaces. In L. Bass and P. Dewan (Eds.), Trends in Software, Volume 1: User Interface Software. New York: Wiley, 1-30.
Hix, D. and Hartson, H. R. (1994). IDEAL: An Environment for User-Centered Development of User Interfaces. Proceedings of EWHCI'94: Fourth East-West International Conference on Human-Computer Interaction, 195-211.
Hutchins, E. L., Hollan, J. D., and Norman, D. A. (1986). Direct Manipulation Interfaces. In D. A. Norman and S. W. Draper (Eds.), User Centered System Design. Hillsdale, NJ: Lawrence Erlbaum Associates, 87-124.
Jacob, R. J. K. (1993). Eye-Movement-Based Human-Computer Interaction Techniques: Toward Non-Command Interfaces. In H. R. Hartson and D. Hix (Eds.), Advances in Human-Computer Interaction. Norwood, NJ: Ablex, 151-190.
Jeffries, R. (1994). Usability Problem Reports: Helping Evaluators Communicate Effectively with Developers. In J. Nielsen and R. L. Mack (Eds.), Usability Inspection Methods. New York: John Wiley and Sons, Inc., 273-294.
Kieras, D. E. (1988). Towards a Practical GOMS Model Methodology for User Interface Design. In M. Helander (Ed.), Handbook of Human-Computer Interaction. Elsevier Science Publishers B. V., 135-157.
Kieras, D. and Polson, P. G. (1985). An Approach to the Formal Analysis of User Complexity. International Journal of Man-Machine Studies, 22, 365-394.
Koenemann-Belliveau, J., Carroll, J. M., Rosson, M. B., and Singley, M. K. (1994). Comparative Usability Evaluation: Critical Incidents and Critical Threads. Proceedings of CHI Conference on Human Factors in Computing Systems. New York: ACM, 245-251.
LeCompte, M. D. and Preissle, J. (1993). Ethnography and Qualitative Design in Educational Research (2nd Ed.). San Diego: Academic Press.
Lewis, C., Polson, P., Wharton, C., and Rieman, J. (1990). Testing a Walkthrough Methodology for Theory-Based Design of Walk-up-and-Use Interfaces. Proceedings of CHI Conference on Human Factors in Computing Systems, April 1-5, New York: ACM, 235-242.
MacKenzie, I. S. (1992). Fitts' Law as a Research and Design Tool in Human-Computer Interaction. Human-Computer Interaction, 7, 91-139.
MacLean, A., Young, R. M., Bellotti, V. M. E., and Moran, T. P. (1991). Questions, Options, and Criteria: Elements of Design Space Analysis. Human-Computer Interaction, 6, 201-250.
Macleod, M. and Bevan, N. (1993). MUSiC Video Analysis and Context Tools for Usability Measurement. Proceedings of INTERCHI Conference on Human Factors in Computing Systems, April 24-29. New York: ACM, 55.
Mayhew, D. J. (1992). Principles and Guidelines in Software User Interface Design. Englewood Cliffs, NJ: Prentice-Hall.
McCauley, M. E. (1984). Human Factors in Voice Technology. In F. A. Muckler (Ed.), Human Factors Review. Santa Monica, CA: Human Factors Society, 131-166.
Meister, D. (1985). Behavioral Analysis and Measurement Methods. New York: Wiley.
Moran, T. P. (1981). The Command Language Grammar: A Representation for the User Interface of Interactive Computer Systems. International Journal of Man-Machine Studies, 15, 3-51.
Muller, M. J. (1991). PICTIVE-An Exploration in Participatory Design. Proceedings of CHI Conference on Human Factors in Computing Systems, April 27-May 2. New York: ACM, 225-231.
Mullet, K. and Sano, D. (1995). Designing Visual Interfaces. Mountain View, CA: SunSoft Press.
Myers, B. A. (1989). User-Interface Tools: Introduction and Survey. IEEE Software, 6 (1), 15-23.
Myers, B. A. (1993). State of the Art in User Interface Software Tools. In H. R. Hartson and D. Hix (Eds.), Advances in Human-Computer Interaction. Norwood, NJ: Ablex.
Myers, B. A. (1995). State of the Art in User Interface Software Tools. In R. M. Baecker, J. Grudin, W. A. S. Buxton, and S. Greenberg (Eds.), Readings in Human-Computer Interaction: Toward the Year 2000. San Francisco: Morgan-Kaufmann, 323-343.
Nielsen, J. (1989). Usability Engineering at a Discount. In G. Salvendy and M. J. Smith (Eds.), Designing and Using Human-Computer Interfaces and Knowledge-Based Systems. Amsterdam: Elsevier Science Publishers, 394-401.
Nielsen, J. (1992). Finding Usability Problems Through Heuristic Evaluation. Proceedings of CHI Conference on Human Factors in Computing Systems, May 3 - 7. New York: ACM, 373-380.
Nielsen, J. (1993). Usability Engineering. San Diego: Academic Press, Inc.
Nielsen, J. and Mack, R. L. (Eds.). (1994). Usability Inspection Methods. New York: John Wiley and Sons.
Nielsen, J. and Molich, R. (1990). Heuristic Evaluation of User Interfaces. Proceedings of CHI Conference on Human Factors in Computing Systems, April 1-5. New York: ACM, 249-256.
Norman, D. A. (1986). Cognitive Engineering. In D. A. Norman and S. W. Draper (Eds.), User Centered System Design. Hillsdale, NJ: Lawrence Erlbaum Associates, 31-61.
Norman, D. A. and Draper, S. W. (Eds.). (1986). User Centered System Design: New Perspectives on Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Payne, S. J. (1991). Display-based action at the user interface. International Journal of Man-Machine Studies, 35, 275-289.
Pernice, K. and Butler, M. B. (1995). Database Support for Usability Testing. Interactions (January), 27-31.
Shneiderman, B. (1983). Direct Manipulation: A Step Beyond Programming Languages. IEEE Computer, 16(8), 57-69.
Shneiderman, B. (1992). Designing the User Interface: Strategies for Effective Human-Computer Interaction (2nd Ed.). Reading, MA: Addison-Wesley.
Shubin, H., Falck, D., and Johansen, A. G. (1996). Exploring Color in Interface Design. Interactions (July/August), 36-48.
Smith, S. L. and Mosier, J. N. (1986). Guidelines for Designing User Interface Software (ESD-TR-86-278/MTR 10090). Bedford, MA: MITRE Corporation.
Tufte, E. R. (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
Virzi, R. A., Sokolov, J. L., and Karis, D. (1996). Usability Problem Identification Using Both Low- and High-Fidelity Prototypes. Proceedings of CHI Conference on Human Factors in Computing Systems. New York: ACM, 236-243.
Weller, H. G. and Hartson, H. R. (1992). Metaphors for the Nature of Human-Computer Interaction in an Empowering Environment: Interaction Style Influences the Manner of Human Accomplishment, 8(3), 313-333.
Wharton, C., Bradford, J., Jeffries, R., and Franzke, M. (1992). Applying Cognitive Walkthroughs to More Complex User Interfaces: Experiences, Issues, and Recommendations. Proceedings of CHI Conference on Human Factors in Computing Systems, May 3 - 7, New York: ACM, 381-388.
Whiteside, J., Bennett, J., and Holtzblatt, K. (1988). Usability Engineering: Our Experience and Evolution. In M. Helander (Ed.), Handbook of Human-Computer Interaction. Amsterdam: Elsevier North-Holland, 791-817.
Williges, R. C. and Williges, B. H. (1995). Travel Alternatives for the Mobility Impaired: The Surrogate Electronic Traveler (SET). In A. D. N. Edwards (Ed.), Extra-Ordinary Human-Computer Interaction: Interfaces for Users with Disabilities. New York: Cambridge University Press, 245-262.
Williges, R. C., Williges, B. H., and Elkerton, J. (1987). Software Interface Design. In G. Salvendy (Ed.), Handbook of Human Factors. New York: John Wiley and Sons, 1416-1449.
Wixon, D., Holtzblatt, K., and Knox, S. (1990). Contextual Design: An Emergent View of System Design. Proceedings of CHI Conference on Human Factors in Computing Systems, April 1-5. New York: ACM, 329-336.
The supplementary references listed here are chosen for breadth. The references cited above are also representative of the literature, but not all are repeated here.
ACM. (1990). Resources in Human-Computer Interaction. New York: ACM Press.
Baecker, R. M. and Buxton, W. A. S. (Eds.). (1987). Readings in Human-Computer Interaction: A Multidisciplinary Approach. San Francisco: Morgan-Kaufmann.
Baecker, R. M., Grudin, J., Buxton, W. A. S., and Greenberg, S. (Eds.). (1995). Readings in Human-Computer Interaction: Toward the Year 2000. San Francisco: Morgan-Kaufmann.
Carroll, J. M. (Ed.). (1987). Interfacing Thought: Cognitive Aspects of Human-Computer Interaction. Cambridge, MA: The MIT Press.
Carroll, J. M. (Ed.). (1991). Designing Interaction: Psychology at the Human-Computer Interface. Cambridge, England: Cambridge University Press.
Hartson, H. R. and Hix, D. (1989). Human-Computer Interface Development: Concepts and Systems for Its Management. ACM Comput. Surv., 21(1), 5-92.
Helander, M. (Ed.). (1988). Handbook of Human-Computer Interaction. Amsterdam: North-Holland.
Monk, A. F. and Gilbert, N. (1995). Perspectives on HCI-Diverse Approaches. Cambridge, England: Cambridge University Press.
Olson, J. R. and Olson, G. M. (1990). The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS. Human-Computer Interaction, 5, 221-265.
Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., and Carey, T. (1994). Human-Computer Interaction. Reading, MA: Addison-Wesley.
Rubin, J. (1994). Handbook of Usability Testing. New York: John Wiley and Sons.
Thimbleby, H. (1990). User Interface Design. New York: ACM Press/Addison-Wesley.