
and machine vision; auditory and touch-based output; interfaces that combine multiple modes of input and output; and visual displays, including immersive or virtual reality systems. Because the ECI challenge involves connecting to the information infrastructure, rather than just to stand-alone systems, this chapter reviews the current status of, and research challenges for, interfaces to systems in large-scale national networks. The chapter ends with the steering committee's conclusions, based on workshop discussions and other inputs, about the research priorities for advancing these technologies and our understanding of how to use them to support every citizen.

Framing the Input/Output Discussion: Layers of Communication

The interface is the means by which a user communicates with a system, whether to get it to perform some function or computation directly (e.g., computing a trajectory, changing a word in a text file, displaying a video); to find and deliver information (e.g., getting a paper from the Web or information from a database); or to provide ways of interacting with other people (e.g., participating in a chat group, sending e-mail, jointly editing a document). As a communications vehicle, interfaces can be assessed and compared in terms of three key dimensions: (1) the language(s) they use, (2) the ways in which they allow users to say things in the language(s), and (3) the surface(s) or device(s) used to produce output (or register input) expressions of the language. The design and implementation of an interface entail choosing (or designing) the language for communication, specifying the ways in which users may express "statements" of that language (e.g., by typing words or by pointing at icons), and selecting the device(s) that allow communication to be realized: the input/output devices.

Box 3.1 gives some examples of choices at each of these levels. Although the selection and integration of input/output devices will generally involve hardware concerns (e.g., choices among keyboard, mouse, drawing surfaces, sensor-equipped apparel), decisions about the language definition and means of expression affect interpretation processes that are largely treated in software. The rest of this section briefly describes each of the dimensions and then examines how they can be used to characterize some currently standard interface choices; the remainder of the chapter provides an examination of the state of the art.
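The three dimensions above can be thought of as a simple data structure: any particular interface is a choice of language, a choice of means of expression, and a choice of devices. The following minimal sketch (in Python; the class and field names are illustrative assumptions, not from the report) shows how two familiar interfaces differ along these dimensions.

```python
from dataclasses import dataclass

@dataclass
class InterfaceSpec:
    """A characterization of an interface along the three layers
    discussed above (names are hypothetical, for illustration only)."""
    language: str     # what can be said (language layer)
    expression: str   # how statements are produced (expression layer)
    devices: tuple    # hardware that registers the expressions

# Two familiar interfaces described in these terms:
gui = InterfaceSpec(
    language="direct manipulation",
    expression="point/click/drag",
    devices=("video display", "mouse"),
)
shell = InterfaceSpec(
    language="restricted verbal language",
    expression="typed text",
    devices=("video display", "keyboard"),
)

# The two interfaces share a device but differ at the other two layers.
print(gui.language)    # -> direct manipulation
print(shell.language)  # -> restricted verbal language
```

Note that the same device can serve very different languages, which is why the three layers are worth keeping distinct when assessing an interface.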

Language Contrasts and Continuum

There are two language classes of interest in the design of interfaces: natural languages (e.g., English, Spanish, Japanese) and artificial languages


BOX 3.1 Layers of Communication

1. Language Layer

- Natural language: complex syntax, complex semantics (whatever a human can say)

- Restricted verbal language (e.g., operating system command language, air traffic control language): limited syntax, constrained semantics

- Direct manipulation languages: objects are "noun-like" and get "verb equivalents" from manipulations (e.g., dragging file X to the Trash means "erase X"; dragging a message onto the Outgoing Mailbox means "send message"; drawing a circle around object Y and clicking means "I'm referring to Y, so I can say something about it")

2. Expression Layer

Most of these types of realization can be used to express statements in most of the above types of languages. For instance, one can speak or write natural language; one can speak or type a restricted language, such as a command-line language; and one can speak or write/draw a direct manipulation language.

- Speaking: continuous speech recognition, isolated-word speech recognition

- Writing: typing on a keyboard, handwriting

- Drawing

- Gesturing (American Sign Language provides an example of gesture as the expression-layer realization of a full-scale natural language)

- Pick-from-set: various forms of menus

- Pointing, clicking, dragging

- Various three-dimensional manipulations: stretching, rotating, etc.

- Manipulations within a virtual reality environment: the same range of speech, gesture, point, click, drag, etc., as above, but with three dimensions and a broader field of view

- Manipulation unique to virtual reality environments: locomotion (flying through or over things as a means of manipulating them, or at least looking at them)

3. Devices

Hardware mechanisms (and associated device-specific software) that provide a way to express a statement. Again, more than one technology at this layer can be used to implement items at the layer above.

- Keyboards (many different kinds of typing)

- Microphones

- Light pens/drawing pads, touch-sensitive screens, whiteboards

- Video display screen and mouse

- Video display screen and keypad (e.g., automated teller machine)

- Touch-sensitive screen (touch with pen; touch with finger)

- Telephone (audible menu with keypad and/or speech input)

- Push-button interface, with a different button for each choice (like the large buttons on an appliance)

- Joystick

- Virtual reality input gear: glove, helmet, suit, etc.; also body position detectors
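The direct-manipulation idea in the language layer above can be made concrete: the system interprets a gesture together with its target object as a verb-like command. The sketch below (in Python; the mapping table and function names are hypothetical, chosen only to mirror the Box 3.1 examples) shows one way such an interpretation step might look.

```python
# Hypothetical "verb equivalent" table for a direct manipulation language,
# mirroring the Box 3.1 examples: the gesture plus its target determine
# the command, so no verb is ever typed or spoken.
VERB_EQUIVALENTS = {
    ("drag", "Trash"): "erase",
    ("drag", "Outgoing Mailbox"): "send",
    ("circle+click", None): "refer to",
}

def interpret(gesture, obj, target=None):
    """Translate a manipulation of an object into its verb-like command."""
    verb = VERB_EQUIVALENTS.get((gesture, target))
    if verb is None:
        raise ValueError(f"no verb equivalent for {gesture!r} on {target!r}")
    return f"{verb} {obj}"

print(interpret("drag", "file X", "Trash"))    # -> erase file X
print(interpret("circle+click", "object Y"))   # -> refer to object Y
```

The table makes explicit what the prose states: in a direct manipulation language the "verbs" live in the manipulations, not in any typed or spoken vocabulary.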



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Terms of Use and Privacy Statement