Building Trustworthy Networked Systems of Embedded Computers
Users of networked systems of embedded computers (EmNets) will demand certain characteristics, including reliability, safety, security, privacy, and ease of use (usability). These features can be encapsulated in the term “trustworthiness.”1 Such features must be built into a system from the start; it is difficult, if not impossible, to add them in an adequate and cost-effective manner later on. A major challenge in building these features into EmNets is the combination of an open system architecture with distributed control.
The need for high reliability in almost all EmNets is obvious, but how to ensure it is less obvious. Today’s techniques for designing reliable systems require knowledge of all components of a system—knowledge that cannot be ensured in the rapidly changing environments in which EmNets will be used. Testing mechanisms that apply to standard networks of computing devices may well fail to apply in the context of EmNets, where components may shut down to conserve power or may be limited in computing power or available bandwidth. These and other reliability questions will need to be studied if EmNets of the future are to be trusted.
Some EmNets may operate unattended and be used to control dangerous devices or systems that, through either normal or flawed operation, could lead to significant human, economic, or mission losses. Similar problems were encountered early on in manufacturing automation; here the systems are potentially larger, certainly more distributed, and operate in much less controlled environments. The constraints on EmNets—including long lifetimes, changes in constituent parts, and resource limitations—strain existing methods for evaluating and ensuring system safety. In addition, many EmNets will be operated—and perhaps even configured—by end users with little technical training. New designs may be needed that allow untrained users to operate these systems safely and effectively. Accidents related to software already are starting to increase in proportion to the growing use of software to control potentially dangerous systems (Leveson, 1995). Networking embedded systems together, as envisioned for many new applications, will only add to these problems by enabling a larger number of potentially more complex interactions among components—interactions that cannot be anticipated or properly addressed by system users. New system and software engineering frameworks are needed to deal with these problems and enhance the safety of EmNets.
Security and privacy will also be required in many systems. The amount of information that can be collected by EmNets is staggering, the variety is wide, and the potential for misuse is significant. Capabilities are needed to verify that the information cannot be compromised or used by those who have no right to it and/or to cope with the likelihood that misuse or other problems are going to occur. In addition, these systems will need to be protected from tampering and attacks mounted from outside the system. New networking technologies will introduce the potential for new types of attacks. Security can help with elements of reliability and safety as well, since it involves not only satisfying objectives but also incorporating protective mechanisms.
Finally, EmNets need to be usable. The systems must be easy to learn, easy to use, and amenable to understanding, often at different levels of detail by different types of users. As these systems become more complex and open to more varieties of computer-mediated interaction, they need to be designed in such a way that end users and operators understand what a system is doing. Systems that violate users’ expectations lead to frustration at best and errors at worst; it will be important to keep user expectations in mind in design decisions as these systems become more complex and pervasive. In addition, many of these systems will not be directly used by individuals—rather, individuals will interact with EmNets in various contexts, often without realizing it. Understanding how such interactions will take place and what people’s conscious and even subconscious expectations might be is an additional challenge for usability design in EmNets.
The unique constraints on EmNets raise additional concerns; this chapter discusses the challenges inherent in designing EmNets to be reliable, safe, secure, private, and usable, and suggests the research needed to meet these challenges.
Reliability

Reliability is the likelihood that a system will satisfy its behavioral specification under a given set of conditions and within a defined time period. The failure of a particular component to function at all is only one form of unreliability; other forms may result when components function in a way that violates the specified behavior (requirements). Indeed, a component that simply stops functioning is often the simplest to deal with, because such failure can be detected easily (by the other components or the user) and, often, isolated from the rest of the system. Far more difficult failure cases are those in which a component sends faulty information or instructions to other parts of the networked system (examples of so-called Byzantine faults); such a failure can contaminate all components, even those that (by themselves) are functioning normally.
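The contrast between fail-stop and Byzantine failures can be made concrete with a small sketch. The fusion function below is a hypothetical illustration (its name and parameters are invented for this example, not drawn from any particular EmNet design): it masks a bounded number of arbitrarily wrong scalar sensor readings by median voting. With at least 2f + 1 replicas, the median of all readings lies within the range spanned by the correct ones, so up to f Byzantine reporters cannot drag the fused value outside that range.

```python
from statistics import median

def fuse_readings(readings, max_faulty):
    """Mask up to `max_faulty` arbitrarily faulty (Byzantine) scalar
    readings by taking the median. Requires 2*max_faulty + 1 or more
    replicas: the median then lies between the smallest and largest
    values reported by correct nodes."""
    if len(readings) < 2 * max_faulty + 1:
        raise ValueError("need at least 2f + 1 replicas to mask f faults")
    return median(readings)

# Three temperature nodes; one has failed Byzantine-style, reporting a
# wildly wrong value rather than simply going silent.
fused = fuse_readings([20.1, 19.9, 999.0], max_faulty=1)
```

A fail-stop node, by contrast, could simply be dropped from the replica list once detected; it is the plausible-but-wrong reading that forces redundancy and voting.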
Systems need to be designed with great care to address the expected failures. Because EmNets will often be unattended or operated by nonexpert users, operator intervention cannot be relied upon to handle most failures. Current failure models for distributed systems revolve around the ways in which individual components or communications infrastructure can fail (Schneider, 1993). Fault-tolerant designs of such systems generally assume that only a small number of failures of any type will occur. It is not at all clear that these models apply to EmNets, in which the individual components are assumed to be easily and inexpensively replaceable, and the usual mechanisms for detecting faults (such as a request for a keep-alive message) may be prohibitively expensive in terms of power or bandwidth or may generate false failure notifications (in the case of components that shut down occasionally to conserve power). The development of techniques for fault-tolerant designs of systems in which the individual components are resource-bound and easily replaceable is an area ripe for investigation.
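One way to avoid false failure notifications from nodes that sleep to conserve power is to suspect a node only after several scheduled wake-up windows have passed without a heartbeat. The sketch below illustrates that idea; the class name and parameters are invented for this example rather than taken from any existing system.

```python
class DutyCycleFailureDetector:
    """Hypothetical failure detector for duty-cycled nodes: a node is
    suspected only after `misses_allowed` consecutive wake-up periods
    pass without a heartbeat, so nodes that are merely sleeping to
    save power are not flagged as failed."""

    def __init__(self, wake_period, misses_allowed=3):
        self.wake_period = wake_period
        self.misses_allowed = misses_allowed
        self.last_heard = {}

    def heartbeat(self, node, now):
        # Record the most recent time the node was heard from.
        self.last_heard[node] = now

    def suspected(self, node, now):
        last = self.last_heard.get(node)
        if last is None:
            return False  # never registered; cannot judge yet
        return now - last > self.wake_period * self.misses_allowed
```

Tuning `misses_allowed` trades detection latency against false positives, the same trade-off the text notes between bandwidth/power cost and timely fault detection.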
Nor are current techniques for verifying the reliability of design implementations readily applicable to EmNets. While significant work on the hardware verification of nontrivial systems dates back to at least the mid-1980s (see, for example, Hunt’s work on the FM8501 microprocessor (Hunt, 1994)), it is more appropriate for individual components and may not be applicable to EmNets. Each component, to be reliable, must correspond to its specification, and the overall system will be considered reliable if it adheres to the system specification. Experience has shown, however, that merely confirming the reliability of individual components of a system is insufficient for understanding the behavior of the overall system. Existing methods for ensuring reliability are tied to tests of system implementations against the appropriate specification. It should be noted that testing traditionally occurs after design and implementation. While testing and validating complex designs after the fact tends to have more appeal than building in reliability and assurance from the beginning (which calls for greater rigor and costs more), it is an extremely difficult task that already consumes a large fraction of the overall expense, schedule, and labor of an engineering project. Microprocessor design teams typically allocate one validation person for every two designers, and the trend is toward parity for future designs. Many software projects report deploying one validation person for every software writer. Companies are investing heavily in testing because (1) shorter product development schedules no longer permit a small set of testers to work on a project for a long time, (2) the overall complexity of the designs is making it increasingly difficult to achieve the product quality necessary for introducing a new product, and (3) the volumes of product being shipped today make the possible expense of product recalls intolerable to most companies.
“If you didn’t test it, it doesn’t work” is a general validation philosophy that serves many hardware or software design projects well. The idea is that unless the designer has anticipated the many ways in which a product will be used and the validation team has tested them comprehensively, then any uses that were overlooked will be the first avenues of failure. But the problem is not as simple as listing the product’s features and checking them one by one (although that is indeed one aspect of normal validation). Design flaws that manifest themselves so simply are usually easy to detect. The more insidious product design flaws appear only when multiple product features are combined or exercised in unusual ways. The complexity of such situations hampers efforts to detect flaws in advance.
For EmNets, the challenge of testing every system feature against every possible real-world usage will be daunting, even for an accurately configured system in initial deployment. But what happens a few months later when the system owner begins to extend the system in ad hoc ways, perhaps upgrading some nodes and adding others supplied by another vendor? The central challenge to EmNet reliability is to extend today’s tools and validation methods to the much more difficult scope of large-scale EmNets; for example, the Willow project on survivable systems2 and Easel (Fisher, 1999), a simulator for modeling unbounded systems,3 may offer insights.

2. For more information, see <http://www.cs.colorado.edu/serl/its/>.
Reliability Research Topics Deserving Attention
The following research topics deserve attention:
Fault models and recovery techniques for EmNets that take into account their scale, long life, open architecture, distributed control aspects, and the replaceability of their components. Appropriate models of failure and how to deal with failures in systems that are distributed and have the scale, longevity, openness, and component characteristics of EmNets have yet to be investigated. Until such investigations take place it will be difficult to design reliable systems, much less test implementations of those designs. Such research should be linked to research into the computational models appropriate for such systems (see Chapter 5).
EmNet monitoring and performance-checking facilities. Over the past several decades, considerable research has gone into monitoring and system health management, but EmNets pose unique problems owing to their potential scale and reconfigurability and the scarcity of component energy.
Verification of EmNets’ correctness and reliability. The size and distributed nature of EmNets may preclude complete system testing outside of simulation. Advances in analysis and simulation techniques would increase confidence in cases where complete testing is virtually impossible before the system is used in the field.4
Safety

Safety refers to the ability of a system to operate without causing an accident or an unacceptable loss.5 Many EmNets (for example, a home entertainment system) will not present significant safety problems even if they fail, although such failures might frustrate or inconvenience users. Other failures may raise significant safety issues.
Safety and reliability do not necessarily go hand in hand. An unreliable system or component is not necessarily unsafe (for example, it may always fail into a safe state, an erroneous software output may not cause the system to enter an unsafe state, or a system that stops working may even decrease safety risks), whereas a highly reliable system may be unsafe (for example, the specified behavior may be unsafe or incomplete, or the system may perform unintended functions). Therefore, simply increasing the reliability of the software or system may have no effect on safety and, in some systems, may actually reduce safety. Reliability is defined in terms of conformance with a specification; accidents usually result from incorrect specifications.

3. For more information, see <http://www.cert.org/easel/easel_foundations.html>.
4. See Making IT Better (CSTB, 2000c) for a discussion of the limitations of the simulation of complex systems today.
5. “Accident” is not an engineering term; it is defined by society. In the aviation community, for example, the term “accident” is used to refer to the loss of the hull of an aircraft; anything else is considered an incident, even though human life may be at risk.
Whether viewed as a constraint on, or a requirement of, the system design, safety concerns limit the acceptable design space. Like the other desirable characteristics addressed in this chapter, safety cannot effectively be added onto a completed design, nor can it be tested or measured “into” a design. Safety constraints need to be identified early on in the design process so that the system can be designed to satisfy them. Testing and measurement simply provide assurance on how effectively the design incorporates already-specified safety considerations.
Engineers have developed a range of techniques for ensuring system safety, many of which have been extended to systems with embedded computers; however, much more research is needed (Leveson, 1995) in this area, which has attracted comparatively little attention by computer science researchers. In system safety engineering, safety efforts start early in the concept development stage. The process involves identifying system hazards (i.e., system states that can lead to accidents or unacceptable losses), using them as the basis for writing system safety requirements and constraints, designing the system to eliminate the hazards and their effects, tracing any residual safety-related requirements and constraints that cannot be eliminated at the system level down to requirements and constraints on the behavior of individual system components (including software), and verifying that the efforts were successful.
EmNets introduce added difficulties to this process. They greatly increase the number of states and behaviors that must be considered and the complexity of the interactions among potentially large numbers of interconnected components. Although all large digital systems experience similar problems, EmNets are unusual in that many operate in real time and with limited direct human intervention. Often they are either unattended or managed by human operators who lack technical skills or are untrained. Furthermore, EmNets afford the possibility of more dynamic configuration than do many other types of systems. Many EmNets are likely to arise from ad hoc extensions of existing systems or from several systems tied together in ways unanticipated by the original designers.
Historically, many accidents have been attributed to operator error.
Indeed, a common reason for automating safety-critical systems (apart from increasing efficiency) is to eliminate operator error. Automation has done this, but it has also created a new type of error, sometimes called technology-induced human error. Many of these new errors are the result of what human factors experts have labeled technology-centered automation, whereby designers focus most of their attention on the mapping from software inputs to outputs, mathematical models of required functionality, and the technical details and problems internal to the computer. Little attention is usually given to evaluating software in terms of whether it provides transparent and consistent behavior that supports users in their monitoring and control tasks. Research on various types of system monitoring, including hierarchical monitoring and standards thereof, may prove useful here.
Without the kind of support mentioned previously, technology-centered automation has changed the reasons for accidents and the types of human error involved. Humans have not been eliminated from most high-tech systems, but their role has changed significantly: Often, they are monitors or high-level managers of the automation, which directly controls the system. On modern fly-by-wire aircraft, for example, all pilot commands to move the control surfaces go through a computer—there are no direct mechanical linkages. Automation designs seldom support the new roles humans are playing. And yet, when the inevitable human error results from what aircraft human factors experts have called clumsy automation (Wiener and Curry, 1980), the accident is blamed on the human rather than the system or automation design. All of the recent Airbus accidents and some of the recent Boeing accidents involved pilot confusion arising from the design of the automation (Leveson et al., 1997). Examples include mode confusion and the lack of situational awareness (both related to inadequate feedback, among other things), increased pilot workload during emergencies and high stress periods, automation and pilots fighting over control of the aircraft, increased amounts of typing, and pilot distraction. Human factors experts have tried to overcome clumsy automation by changing the human interface to the automation, changing user training, or designing new operational procedures to eliminate the new human errors resulting from poor automation design. These efforts have had limited success. Some have concluded that “training cannot and should not be the fix for bad design” (Sarter and Woods, 1995) and have called for more human-centered automation. Currently, however, coping mechanisms are required until such automation becomes more widespread.
If researchers can identify the automation features that lead to human error, they should be able to design the software in such a way that errors are reduced without sacrificing the goals of computer use, such as increased productivity and efficiency. EmNets complicate the process of error reduction simply because of their increased complexity and the opacity of system design and operation. Today what can be automated easily is automated, leaving the rest for human beings. Often this causes the less critical aspects of performance to be automated, leaving the more critical aspects to humans. Worse, the systems often fail just when they are most needed—when conditions are complex and dangerous, when there are multiple failures, or when the situation is unknown. Unfortunately, if the routine has been automated, the human controller has been out of the loop, so that when the automated systems fail, it takes time for the human operator to regain a sense of the system state, time that may not be available. EmNets increase the likelihood that human intervention will not be readily available. Approaches to automation should be changed from doing what is relatively easily achievable to doing what is most needed by human operators and other people affected by system behavior. This principle is, of course, applicable to more than just EmNets. The solution will need to incorporate the economic and institutional contexts as well as the technology.
Safety Research Topics Deserving Attention
Widespread use of EmNets will compound the existing challenges involved in designing safety into systems. These challenges will need to be addressed quickly to avoid future problems and to ensure that the potential of EmNets is effectively tapped. To address problems of safety in EmNets adequately, greatly expanded research will be needed in a number of areas, including the following:
Designing for safety. Safety must be designed into a system, including the human-computer interface and interaction. New design techniques will be required to enforce adherence to system safety constraints in EmNet behavior and eliminate or minimize critical user errors. In addition, designers often make claims about the independence of components and their failure modes to simplify the design process and make systems more amenable to analysis, but they lack adequate tools and methodologies for ensuring independence or generating alerts about unknown interdependencies. The system itself, or the design tools, will need to provide support for such capabilities. This may well require changes in the way computer scientists approach these sorts of problems as well as collaboration with and learning from others, such as systems engineers, who have addressed these issues in different domains.
Hazard analysis for EmNets. The deficiencies in existing hazard analysis techniques when applied to EmNets need to be identified. Designers and implementers of EmNet technology who may not necessarily be familiar with such techniques will need to understand them. Hazard analysis usually requires searching large system state spaces for potential sources of hazards; EmNets will complicate this search process for the reasons already discussed. The results of hazard analysis are critical to the process of designing for safety and verifying that the designed and implemented system is safe.
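At toy scale, the state-space search at the heart of hazard analysis can be sketched as a reachability check: enumerate successor states from the initial configuration and ask whether any state flagged as hazardous can be reached. This is an illustrative sketch only (the model, state names, and interlock are invented for the example); real hazard analysis contends with state spaces far too large to enumerate, which is precisely the difficulty EmNets compound.

```python
from collections import deque

def hazard_reachable(initial, transitions, is_hazard):
    """Breadth-first reachability over a discrete-state model: returns
    True if any state satisfying `is_hazard` can be reached from
    `initial`. `transitions(state)` yields successor states."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        if is_hazard(state):
            return True
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Toy model: state is (valve_open, heater_on); the hazard is both at
# once. The one-sided interlock below blocks opening the valve while
# heating, but nothing stops the heater from starting while the valve
# is already open -- exactly the kind of gap hazard analysis exposes.
def step(state):
    valve, heater = state
    yield (valve, not heater)      # heater may toggle at any time
    if not heater:                 # valve may move only when cool
        yield (not valve, heater)

hazard_found = hazard_reachable((False, False), step,
                                lambda s: s[0] and s[1])
```

Here the search reports that the hazardous state is reachable, revealing the interlock as incomplete; adding the symmetric constraint (heater starts only while the valve is closed) makes the hazard unreachable.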
Validating requirements. Most accidents related to software stem from requirements flaws—incorrect assumptions about the required behavior of the software and the operational environment. In almost all accidents involving computer-controlled systems, the software performed according to specification but the specified behavior was unsafe (Leveson, 1995; Lutz, 1993). Improved specification and analysis techniques are needed to deal with the challenges posed by EmNets. These techniques should take into account that user needs and therefore specifications will evolve.
Verifying safety. In regulated industries, and even in unregulated ones in which liability or costly recalls are a concern, special procedures are required to provide evidence that fielded systems will exhibit adequate levels of safety. EmNets greatly complicate the quest for such assurance, and new approaches will be needed as the complexity and the number and variety of potential failure modes or hazardous system behaviors increase.
Ensuring safety in upgraded software. Even if the software is designed and assured to be safe in the original system context, software can be expected to change continually throughout the life of a system as new functionality is added and bugs are fixed. Each change will require assurances that safety has not been compromised, but because it will not be practical to redo a complete software system safety analysis for every change, new techniques will be needed to minimize the amount of effort required to verify safety when potential system and software design changes are proposed and to cope with the consequences of safety failures. Users can be expected to extend the system in ways unanticipated in the original design, adding new components, trying out new functions, and so on.6 In addition, the system and software design may become unsafe if there are unanticipated changes in the environment in which the software is operating (a likely occurrence in a battlefield situation, for example). Methods are needed to audit the physical components of the system and the environment (including system operators) to determine whether the changes violate the assumptions underlying the hazard analysis. Approaches to software upgrades must address safety concerns in hardware components, too (for example, component audits could include calls to hardware components to validate their IDs).
Security

Security relates to the capability to control access to information and system resources so that they cannot be used or altered by those lacking proper credentials. In the context of EmNets, security relates to controlled access to the subnetworks, the information stores, the devices that are interconnected, and the computing and communication resources of a given network. Many of the research issues that were raised with respect to safety in EmNets also apply to security. In addition, security analysis needs to assume that an adversary is actively trying to abuse, break, or steal from the system (an assumption not usually made for safety analysis).
Security can be difficult to achieve in information systems of all types, but will perhaps be especially so in EmNets. Not only will the deployment of EmNets containing various sensor technologies allow the physical world to become more tightly interconnected with the virtual world, but the networking of embedded computers will also tend to increase the vulnerability of these systems by expanding the number of possible points of failure, tampering, or attack, making security analysis more difficult. The range of products into which processing and networking capabilities may be embedded will greatly expand the number of nodes at which security will need to be explicitly considered and influence the expectations at each node. Many of these nodes will consist of presumably ordinary everyday devices in which security is not currently a concern (thermostats, audio equipment, and so on); however, mischief will become an increasing risk factor. Their close connection to the physical world and interconnection with larger networks accessible by more people with unknown motives will make lapses of security potentially more damaging, increasing the risks associated with EmNets. In a military context, of course, the compromise of even fairly prosaic devices (such as food storage equipment or asset monitoring systems) that are part of a larger EmNet could have serious security implications.
EmNets’ configurations will be much more dynamic, even fluid, than typical networked systems. EmNet user interaction models may be quite different from those in traditional networks. These properties have significant impact on security (and privacy). For example, as one moves from place to place, one’s personal area network may diffuse into other networks, such as might happen in a battlespace environment. Interactivity may not be under an individual’s direct control, and the individual may not understand the nature of the interactivity. Various nodes will engage in discovery protocols with entities in contexts they have never encountered before. Some EmNets may be homogeneous, and their connectivity with other networks may be straightforward. In such cases, traditional network security techniques will suffice, with policy and protection methods executing in a gateway device. In heterogeneous, diffuse, fluid networks, traditional network security methods will not be effective. Rather, trust management and security policies and methods will be the responsibility of individual nodes and applications. This may put demands on the operating system (if any) that runs on those individual nodes. Nodes may need to distinguish between secure operating modes and more permissive modes (especially during discovery, configuration, and update procedures).
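The distinction between a locked-down secure mode and a more permissive mode can be made concrete with a toy per-node policy table. The mode and operation names below are invented for illustration; they do not correspond to any particular EmNet platform.

```python
# Hypothetical per-node policy: which operations each mode permits.
ALLOWED_OPS = {
    "secure": {"read_sensor", "report"},
    "discovery": {"read_sensor", "report", "pair", "accept_config"},
}

def authorize(mode, operation):
    """Deny by default: an operation is permitted only if the node's
    current mode explicitly lists it."""
    return operation in ALLOWED_OPS.get(mode, set())
```

A node would enter the permissive mode only briefly, during discovery, configuration, or update, and fall back to the locked-down mode for normal operation, so that pairing and reconfiguration requests arriving at other times are simply refused.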
Protecting System Boundaries
A key problem is how to protect the network from outside attack. The physical world has a number of well-understood and easily recognizable protective barriers and security structures. Retail stores, for example, have a physical structure to protect valuables. Even though these stores are open to the public, shoplifters can be thwarted by a well-defined notion of inside and outside and by sensors that detect attempts to conceal goods. Such approaches have few analogues in the virtual world. Further, in the case of shoplifting, a risk management calculation is performed: that is, some level of security breach (shrinkage) is acceptable to merchants because absolute security would be unacceptable to customers. Risk management is also required for EmNets; however, calculating the risk is extremely challenging, and the risks themselves highly variable, because there are so many unknowns in these systems. The physical isolation of a network, together with extremely rigid and secure protocols for attaching terminals, is the only highly reliable method for protecting networked information systems from external threats (that is, attacks from outside hackers and others without access privileges), but this approach is not viable in many systems that need to be interconnected to be useful. In EmNets, physical boundaries and remoteness are effectively erased by the presence of sensors and network connectivity, and notions of entry and exit begin to fade. Except in physically isolated networks, the concepts of inside and outside generally do not exist. Yet this is one way in which users, and even designers, think about security problems—a mindset that, in itself, is extremely problematic. Two further factors complicating the notion of inside versus outside are that components of EmNets will change over time (perhaps all of the components, many times, over the life of an EmNet) and that much of the communication will take place over wireless networks. The wireless aspects of EmNets make them prone to interference and jamming (intentional interference), which affect both reliability and security.
The most common way to establish boundaries between the inside and outside of a networked information system is to use firewalls that control communications at the juncture between two networks. Firewalls do not, however, establish true boundaries; they merely limit the exchange of packets between networks according to policies that are increasingly difficult to understand and assure, especially on networks that need to invite access by growing numbers of users, as in the case of so-called extranets. Although new technology, such as the suite of IPSec protocols,7 seems to offer opportunities to define boundaries (for example, virtual private networks), what it actually provides is access control. The controls apply to arcane objects (such as packet headers) that are difficult for most users to understand. Furthermore, it is almost impossible on most networks to understand all of the means by which objects may be stored or accessed, making the effectiveness of access controls unclear. In EmNets, the system perimeters are even more difficult than usual to define and may change over time. To the extent that EmNets are used over ever wider areas encompassing space (satellites), land, and ocean (seabed and submarines), between large numbers of vehicles, or spread throughout a large battleship, the difficulties of developing and implementing robust access controls will only grow.
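What a firewall actually provides can be seen in miniature below: a first-match filter over header fields with a default-deny fallback. Nothing in it inspects what a permitted packet will do once inside, which is why such rules limit exchange rather than establish a true boundary. The rule format is a simplification invented for this sketch, not the syntax of any real firewall.

```python
def packet_allowed(packet, rules):
    """First-match packet filter. Each rule pairs a dict of required
    header-field values with a verdict; a packet matching no rule is
    dropped (default deny). The filter sees only header fields -- it
    has no notion of 'inside' beyond what the rules encode."""
    for required, verdict in rules:
        if all(packet.get(field) == value
               for field, value in required.items()):
            return verdict
    return False

rules = [
    ({"proto": "tcp", "dst_port": 80}, True),   # allow inbound web
    ({"proto": "udp"}, False),                  # drop all UDP
]
```

Even this tiny policy illustrates the assurance problem the text describes: whether the two rules together admit exactly the intended traffic is already a judgment about arcane header fields, and real policies run to hundreds of interacting rules.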
Managing Scale and Complexity
The large scale and high degree of complexity in EmNets will further frustrate the attempt to identify boundaries and improve security because these characteristics will tend to make system security more difficult to analyze. What are the threats to a given EmNet? How are security risks evaluated? What should be the public policy regarding completion of a security threat analysis preceding deployment of an EmNet, if “deployment” can even be considered an actual, discrete event? It is becoming very difficult to offer even simple answers to these questions as the physi-
cal and logical connectivity of networks increases.8 Methods for evaluating threats and assessing security risks in complex systems whose elements are tightly coupled to physical-world artifacts are lacking. As recent events on the Internet indicate, some types of threats, such as denial-of-service attacks, have a high success rate, and many system users naively hope that the motivation for such attacks is slight.
The virtual world remains difficult to contain. Although cryptographic techniques enable engineers to build arbitrarily secure system components, assembling such elements into secure systems is a great challenge, and the computing research community does not yet understand the principles or possess the fundamental knowledge necessary to build secure systems of the magnitude necessitated by EmNets. It will be increasingly important to ensure that security issues are addressed at the outset of system design so that notions of network isolation can be dealt with in a straightforward manner. Historically, however, networks have been designed, and often deployed, before security issues are addressed. With many—perhaps most—EmNets, that sort of approach will result in problems. If security design is an afterthought, or a security hazard has already produced consequences, the system is usually much too complex even to analyze from a security perspective. At present, it appears likely that systems whose evolvability is already hard to predict will be deployed without a full understanding of the security implications. This suggests both the need to accelerate relevant research and the need for coping and compensating strategies.
Mobile Code and Security
The use of mobile code in EmNets will create another potential vulnerability with implications for security.9 The networking of embedded computers allows both remote updates to the programs that run on those computers and the use of mobile code. Either capability opens the system to a significant security hazard—namely, that the code that eventually runs on these computers may not be code that was legitimately intended to run on them. Furthermore, even legitimate code may have unintentional security flaws. A number of mechanisms can be used to deal with this problem—examples include secure boot loaders and secure class loaders that check code authenticators and compliance with security policies—but such mechanisms are not generally used in today’s embedded computers, let alone in conventional computing and communication systems. As embedded computers become networked, it will be necessary to deploy these and other features much more routinely.

8. These questions apply to the other elements of trustworthiness described in this chapter as well. The size, scale, and complexity of EmNets complicate issues of privacy, reliability, safety, and usability along with security.

9. Mobile code and its implications for self-configuration and adaptive coordination were discussed in Chapter 3.
Of course, EmNet resource constraints, whether of memory, computational capability, or power, will make it difficult to use some of these techniques in their current forms. Their use will also require deployment of the infrastructure needed to support and maintain the policies by which these systems abide. In some cases this process will be straightforward, but in others it will be far more complex. An automobile manufacturer, for instance, may find it comparatively easy to deploy tools that ensure code updates originate with the manufacturer. What is less clear is how to meet the challenge raised by open-air contexts, such as a battlespace, where there is less control over the environment and more opportunity for, and likelihood of, malicious activity.
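As a rough illustration of the authenticator check such a secure loader might perform, the sketch below (in Python, purely for exposition; the report prescribes no implementation) refuses any code image whose tag does not verify. The pre-shared symmetric key is an assumption made for brevity; fielded systems would more likely use public-key signatures so that devices hold no signing secret.

```python
import hashlib
import hmac

# Pre-shared 256-bit device key (a stand-in; real systems would provision
# keys at manufacture and more likely use public-key signatures).
DEVICE_KEY = bytes(32)

def make_authenticator(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Tag the manufacturer would attach to a code update."""
    return hmac.new(key, image, hashlib.sha256).digest()

def load_update(image: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Secure-loader check: refuse code whose authenticator fails."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking tag bytes through timing.
    return hmac.compare_digest(expected, tag)

firmware = b"new control code"
tag = make_authenticator(firmware)
assert load_update(firmware, tag)                # legitimate update accepted
assert not load_update(firmware + b"\x00", tag)  # tampered image rejected
```

Even this small check presumes infrastructure the text notes is often missing: key provisioning, key storage on resource-constrained nodes, and a policy for who may issue updates.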
Denial of Service
Denial-of-service attacks on EmNets could be of significant concern if they are widespread or involve safety-critical systems. Indeed, if society relies more on EmNets and allows them to be involved in many daily human activities, the invitation to disrupters grows. The wireless aspects of EmNets will be particularly susceptible to jamming attempts, for example. Denial-of-service attacks are very difficult to defend against if they are not anticipated in system design and taken into account in each system service protocol, at both high and low levels of communication. Because EmNets are often characterized by a lack of “excess” computing resources, extraneous requests, as found in flooding-based distributed attacks, will more easily swamp these systems. Moreover, they will often be constrained in terms of the power available to them, so the mere act of receiving requests in a denial-of-service attack can cause long-term damage to an EmNet, well beyond the duration of the attack. (For more traditional systems, denial of service is a transient attack; when the attack stops, the damage usually stops accumulating. This is not the case with battery-powered EmNets.)
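The long-term damage from merely receiving attack traffic can be made concrete with a back-of-the-envelope sketch. Every number below is invented for illustration only, not measured from any EmNet; the point is that receive costs alone bound a battery-powered node's life, so a flood permanently consumes lifetime rather than causing a transient outage.

```python
# All numbers are illustrative assumptions, not measurements.
BATTERY_MJ = 5_000_000.0   # node's total energy budget, millijoules
RX_COST_MJ = 0.05          # energy spent receiving and triaging one packet

def lifetime_packets(battery_mj: float = BATTERY_MJ,
                     rx_cost_mj: float = RX_COST_MJ) -> int:
    """Packets the node can afford to receive before its battery dies.
    Every received packet costs energy even if it is ultimately rejected,
    which is why flooding permanently shortens the node's life."""
    return int(battery_mj / rx_cost_mj)

NORMAL_RATE = 10     # packets/second under normal load
FLOOD_RATE = 2_000   # packets/second during a flooding attack

hours_normal = lifetime_packets() / NORMAL_RATE / 3600
hours_flooded = lifetime_packets() / FLOOD_RATE / 3600
print(f"{hours_normal:.0f} h of life normally, {hours_flooded:.0f} h under flood")
```

Under these invented figures the flood shortens the node's life by two orders of magnitude, and the energy spent on attack packets is gone even after the attack stops.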
The above observations may pose significant challenges to the design of high-integrity networks such as those found in the military. Traditional techniques for ensuring the integrity of executables, such as credentialing and integrity checks, are themselves subject to denial-of-service attacks in the form of very simple, otherwise innocuous, easily concealed, network-borne viruses that do little more than append themselves to files or memory images, thereby invalidating credentials. Systems that rely on precise integrity techniques can turn out to be highly fragile. Certainly, operating-system-level techniques may be employed to thwart such denial-of-service attacks, but it remains to be seen how effective they will be.
Security Research Topics Deserving Attention
The security issues discussed above raise a number of research issues that need to be addressed, including the following:
Network access policies and controls. How does one devise, negotiate, deploy, and renew network access policies that address the various threats that may be of concern to a given EmNet? How can this be done in an environment in which the EmNet itself is reconfigured, often on an ad hoc basis? Access controls need to be devised that will be easily understood, able to protect the wide variety of information that may be collected under widely varying and often unforeseeable circumstances, and perhaps even self-configuring.
Enforcement of security policies. How should security policies be enforced on individual network elements as well as in the network operating system? How are these policies devised and enforced when various parts of an EmNet have multiple “owners”?
Critical infrastructure self-defense. Mechanisms need to be identified that are useful for ensuring mobile code safety, defeating virus attacks, and preserving function in spite of the failure or compromise of one or more nodes. What types of safe operating modes can be devised that allow for the secure update of an EmNet, reducing the risk of attack while maintaining performance? This will be especially important for EmNets that control critical infrastructures and support military applications and battlespaces as well as for more civilian-oriented applications such as electric power systems, financial systems, and health-care systems.
Preventing denial-of-service attacks. Mechanisms are needed that preserve the inherent capacity to communicate over EmNets yet effectively defend against denial-of-service attacks.
Energy scarcity. Security in the face of energy scarcity is a significant challenge. New authentication and data integrity mechanisms are needed that require less communication overhead. It may be possible to exploit heterogeneity and asymmetry within the network to allow smaller system elements to do less than larger ones. Further, when there is redundancy in the EmNet, it may be possible to exploit the redundant components in order to detect outliers and possibly sabotaged nodes.
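The last point above, exploiting redundancy to detect outliers and possibly sabotaged nodes, can be sketched very simply. The median-absolute-deviation rule and the threshold below are illustrative choices only, not drawn from the report:

```python
import statistics

def suspect_nodes(readings: dict[str, float], k: float = 3.0) -> list[str]:
    """Flag nodes whose reading deviates from the group median by more
    than k times the median absolute deviation (MAD). Both the MAD rule
    and the threshold k are illustrative, not prescribed anywhere."""
    med = statistics.median(readings.values())
    mad = statistics.median(abs(v - med) for v in readings.values())
    if mad == 0:  # all honest nodes agree exactly
        return [n for n, v in readings.items() if v != med]
    return [n for n, v in readings.items() if abs(v - med) > k * mad]

# Five redundant temperature sensors; n4 reports an implausible value.
readings = {"n1": 20.1, "n2": 19.8, "n3": 20.3, "n4": 35.0, "n5": 20.0}
print(suspect_nodes(readings))  # → ['n4']
```

Because it relies only on comparisons among neighbors, a check like this costs no extra communication beyond the readings the network already exchanges, which matters under the energy constraints just described.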
Privacy

The anticipated broad deployment of EmNets in public spaces and private homes could allow the collection of considerable information about individuals. In many cases, individuals may be unaware of sensor networks deployed in the public spaces or commercial environments they enter and the associated information being collected about them. Even in their own homes, many users may be unaware of the types of information that embedded processors are collecting and possibly transmitting via networks to vendors or other recipients.10 The embedding of information technology into a growing number of devices will increase the amount of personal and personally identifiable information that can be collected, stored, and processed.
Achieving consensus on privacy and confidentiality policies continues to be a vexing problem and will only become more so as EmNets become more pervasive and interconnected. It should be noted that most of the issues involved here are not strictly technical but rather matters of public policy. The question is not so much, What can be done technologically? as, What should or should not be done? The technical challenge lies in designing systems that facilitate support of the policies once they are decided.11,12
Consideration of the privacy implications of EmNets cannot be limited to these systems alone but must extend to the larger networks of more powerful computers to which EmNets connect. Information about transactions and events collected through networks of simple computers and sensors can be and is analyzed for links and correlations in much more powerful computers, both online and offline. It is these more powerful computer networks that can turn relatively innocuous data collected on EmNets into detailed data shadows that allow the reconstruction of complicated personal profiles. How, in the face of these prodigious capabilities, can systems provide anonymity whenever it is useful and appropriate? What are the limits of the protocols and technologies that assure anonymity and prevent linkages between events and transactions? With more and varied data being collected, it is becoming increasingly difficult to avoid the linking of these data and, more specifically, the association of data with real identities, even when protocols that assure local anonymity are used.
Conceivably, policy-controlled, secure systems can collect data and policy-controlled, secure systems can dispense them. But who sets the policies, and who enforces them? Numerous legal and public policy questions need to be addressed. Who owns the personal data collected either with or without the knowledge of the person? Should ownership be negotiable? If so, how can people extract value from their own personal data in an equitable fashion? What is practical and enforceable in systems in which interactions are fleeting and take place very quickly? Can and should protocols be provided whereby people can exchange their data for other value, and how can people avoid being unfairly coerced? These are broad issues that are also applicable to the Internet. In the United States, regulation has limited the use of customer proprietary network information (CPNI) on telephone networks.13 Should there be similar limitations for other networks? Or will it be too difficult to define what is proprietary to the customer? How might the government gain access to such information, or should there be ways of protecting the information from access?
A related issue that will need to be resolved is how (and sometimes whether) to advise people when their actions are being monitored. Many EmNets, for example, will be difficult to detect, and users may be unaware that they are being tracked. This issue has already arisen in the context of electronic commerce, where consumers have expressed concern about the monitoring of their Web surfing and online purchasing. In most cases, consumers are unaware that their actions are being monitored, stored, and compiled into individual profiles, even though they are usually aware that they are interacting with a system and actively providing it with data. EmNets may become so ubiquitous and so invisible that people are no longer aware that they are interacting with a networked system of computers and will often unknowingly and passively provide data. One part of the issue is notification: making people aware of the fact that they are being monitored. As experience with online profiling has demonstrated, however, notification is not a simple process. Many questions need to be answered. When should notification be mandatory? How can users be effectively signaled? Given individual differences in sensitivity and awareness, it may be difficult to provide adequate notification to some without annoying others. This may especially be the case in smart spaces, where all sorts of information may be collected and possibly linked to an individual. More research is needed to address issues like these.

13. See the Code of Federal Regulations, Title 47, Volume 3, Part 64 (GPO, 1998). In 1999 an appeals court vacated the FCC’s CPNI order on First Amendment grounds in US West v. FCC, available at <http://www.fcc.gov/ogc/documents/opinions/1999/uswestcpni.html>. The Supreme Court let this ruling stand.
Additional means may also be needed to control the disclosure of information. The issue arises when information is collected for one purpose but used for others (often referred to as mission creep). Disclosure statements are commonly provided in privacy policies for Web sites, but EmNets often involve more passive interactions in which disclosure is less convenient. For example, a smart space may collect information about an individual and provide it to others with the intention of providing a useful service, but the individual being probed may not welcome this. Are there techniques that would allow users to control the flows of information about them? How can a user answer questions such as, Where is my information? Who has it? How did it get there? Who is responsible if something goes wrong? In addition, what conditions are needed for users to trust others not to misuse their data, and can EmNets be designed to engender an atmosphere of trust that is not due solely to ignorance of their existence in a given situation? Considerable work has begun on technologies that allow consumers to express privacy preferences14 and that allow purveyors of intellectual property to control the dissemination of their work.15 However, these approaches are being developed in the context of Web-based electronic commerce; whether they can be extended to a broader set of EmNet-based applications is unclear.
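To make the idea of machine-checkable privacy preferences concrete, the toy sketch below checks a collector's declared policy against a user's stated preferences before any data flow occurs. The data categories and purposes are invented for illustration and are not drawn from any deployed preference language:

```python
# A user's preferences: for each data category, the set of purposes for
# which its use is permitted. Categories and purposes are hypothetical.
USER_PREFS = {
    "location": {"navigation"},
    "purchase_history": {"order_fulfillment"},
}

def allowed(policy: dict[str, set[str]],
            prefs: dict[str, set[str]] = USER_PREFS) -> bool:
    """A collector's policy is acceptable only if every declared use of
    every data category falls within the purposes the user permits.
    Undeclared categories default to 'no use permitted'."""
    return all(uses <= prefs.get(category, set())
               for category, uses in policy.items())

print(allowed({"location": {"navigation"}}))               # permitted
print(allowed({"location": {"navigation", "marketing"}}))  # refused
```

A real mechanism would of course need far more: negotiation, enforcement after the data leave the user's control, and accountability, which is precisely where the open questions in the text lie.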
It would seem to be very difficult for anyone to avoid giving up personal information to these networks. There are risks even when everyone’s intentions are well understood. It would be useful to have some general principles whereby the risk of inadvertent privacy violation can be minimized. These might include disposing of information as soon as possible after it is used; storing information near the point of use; and avoiding the removal of such data from local control whenever possible. Use of anonymity or pseudonymity and of protocols that prevent the linking of data sets could also be considered.
The fundamental issue is the ability of individuals to control the collection and dissemination of information about them in an environment in which daily transactions and events—and the events associated with their personal environment—involve EmNets or are controlled or monitored by them. Research is needed to better understand people’s expectations about their rights and abilities to exercise such control and resist intrusion. What are the expectations about privacy, and how are they evolving as people become more exposed to and familiar with various technologies? Can one outline the privacy rights that people either expect or legally possess, and can one identify ways in which different types of EmNets threaten those rights and run counter to those expectations? Conversely, as EmNets become ubiquitous, are there ways to use the technology to defend privacy rights, or will privacy necessarily be lost? As the FTC has recognized (Thibodeau, 2000), many privacy questions will need to be rethought in a world of increasing automation and instantaneous wireless communication. Both privacy expectations and case law are evolving, and the trade-offs involved will need to be clearly understood. More than previous technologies, EmNets are likely to be ubiquitous and enveloping, an unavoidable part of the environment, and individuals will often not be in control of their interactions with them. In such cases, privacy issues cannot be addressed by education and personal policies alone; rather, they become (even more) a matter of public policy.16
Privacy As Related to Security
While security and privacy are very distinct properties, they are related (for example, security can provide mechanisms with which to protect privacy). Privacy is often said to involve the right or desire to be left alone. In the context of EmNets, it more often has to do with the right or intention of a person to keep certain personal information confidential. A breach of security may result in a breach of privacy when someone without proper credentials gains access to private information; a breach of privacy may also occur when information that is freely shared over a network is abused or when EmNets are deployed into various environments without notification, consent, or full disclosure. Breaches of security may also involve the dissemination, through an EmNet, of information that is intended to be shared for a narrow purpose but is nonetheless used for broader purposes because of an inability to precisely control data flows, or the use of information collected for one purpose for a completely different purpose.

16. CSTB anticipates a policy-oriented study on privacy in the information age to begin sometime in 2001. In addition, Chapter 5 of the CSTB report The Internet’s Coming of Age (CSTB, 2001) examines implications for broad public policy, including issues related to privacy and anonymity on the Internet.
Security and privacy are related for another reason, too: both may be studied and understood in a given context by analyzing threats and risks. The security threats to a given network can be catalogued; countermeasures for those threats specified; and then residual risks of failure, oversight, and inadequacy identified. Similarly, the threats to privacy from the deployment or specific use of EmNets may be catalogued, means for protecting and preserving privacy specified, and the residual risks analyzed and managed. Privacy issues may be somewhat more challenging to deal with than security issues because they entail varying expectations and values and because access control practices often call for conveying personal information. Privacy seems far more malleable than security, because what counts as private is socially negotiated; privacy violations may occur when individuals have different understandings about the boundaries and contexts of privacy (this will be especially true with new technologies and where the technology moves information across multiple social contexts). Expectations are in flux, as the Internet is demonstrating that there is less privacy than may once have been assumed. Further, people differ with respect to the types of information they wish to keep private, the conditions under which they might allow access to different sorts of information (for example, health records, financial information, and online purchases), and the degree to which they value privacy.
Privacy Research Topics Deserving Attention
While the privacy issues discussed above raise many public policy questions, they also raise several technical research issues that need to be addressed. Both the policy and the technical issues demand much additional research, but this research need not be EmNet-specific. In addition, while many of the policy and technical issues may not be directly applicable to defense and military situations, the need in such situations for identification (for example, friend or foe) and for need-to-know classification of information makes some of these points relevant. Privacy has largely been dealt with through advocacy and through legal and political processes; increasingly, however, it will involve and require technical mechanisms and contextualizations. The committee strongly encourages additional research on the many policy issues surrounding privacy and makes the following recommendations with respect to technical concerns:
Flexible policy management. EmNets, and indeed all information systems, implement some form of privacy policy. Often, however, this happens by default, not by design. Research is needed to develop a calculus of privacy17 and ways to enable flexible, configurable privacy policies, so that as external situations or policies change, the system can easily be adjusted to reflect the change. Systems should be designed to allow the incorporation of a wide range of potential privacy policies.
Informed consent. Implementing informed consent in technological systems is a difficult challenge. EmNets seem likely to make this problem that much harder. Owing to the passive and ubiquitous nature of many of these systems, users will often not be aware that information about them is being gathered. Notifying users who may not even be aware of the existence of the EmNet is a difficult problem. Even more difficult is acquiring meaningful informed consent from those users. Research into these and related issues is essential.
Accountability research. Research into possible legal requirements for the protection of personal information may be needed to ensure adequate accountability. The goal should be to ensure that specific individuals or agents, probably those who deploy EmNets and will use the information gained therefrom, are deemed responsible and accountable for the protection of an individual’s private information collected on those networks.18 Privacy and/or anonymity preservation techniques need to factor in accountability. Accountability, like privacy, is not absolute (Lessig, 1999). What is needed is technology to support a range of preferences, which may vary with users and contexts, for enhancing privacy, accountability, and other values.
Anonymity-preserving systems. Research in designing systems whose default policy is to preserve individual users’ anonymity is needed. It is an open question to what extent these systems would need to allow completely untraceable use rather than just strict identity protection except in the presence of authorized agents. Another possible avenue of investigation would be to enable anonymity-preserving authentication19—for example, to enable systems to determine that individuals are members of a certain group (say, doctors in a hospital) but not to allow more fine-grained identification.20
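One simple way to make the group-membership idea above concrete is a sketch in which possession of a group key, rather than an individual credential, answers an authentication challenge: the verifier learns "a doctor" but not which one. This shared-key scheme is a deliberate simplification chosen for brevity; any member could impersonate any other, and unlike true group signatures there is no authorized tracing of the kind footnote 19's literature addresses.

```python
import hashlib
import hmac
import os

# One key per group (e.g., "doctors"); possession proves membership but
# carries no individual identity. Group names and keys are hypothetical.
GROUP_KEYS = {"doctors": os.urandom(32), "visitors": os.urandom(32)}

def prove_membership(group: str, challenge: bytes) -> bytes:
    """Response any member of the group can compute."""
    return hmac.new(GROUP_KEYS[group], challenge, hashlib.sha256).digest()

def verify_membership(group: str, challenge: bytes, proof: bytes) -> bool:
    """Confirms 'some member of this group' without identifying which."""
    expected = hmac.new(GROUP_KEYS[group], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

challenge = os.urandom(16)                       # fresh per session
proof = prove_membership("doctors", challenge)   # made by some doctor
assert verify_membership("doctors", challenge, proof)    # a doctor, but which?
assert not verify_membership("visitors", challenge, proof)
```

The fresh challenge prevents replay across sessions; what the sketch cannot do, and what motivates the research called for above, is combine this unlinkability with the accountability discussed in the preceding item.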
Usability

Usability refers to the effectiveness and efficiency of a system in meeting the goals and expectations of its users. All complex systems raise usability issues, and EmNets are no exception. Usability is not a single trait of a system but rather an umbrella term encompassing a number of distinct (and often conflicting) traits, including learnability, efficiency, effectiveness, and satisfaction. Moreover, these traits are not intrinsic to the system but must each be evaluated with respect to specific classes of users. For example, what is intuitive and therefore effective for a casual or beginning user may be tedious and verbose to an experienced user. Further, in the case of EmNets, it may not be accurate to refer to the people who interact with them as “users” per se. Consider the case of an EmNet controlling various systems of a building: generally the EmNet will be essentially invisible to the people interacting with its features.

An important distinction must also be made between users, who are outside the system boundary, and operators, who are within the system boundary and are, in effect, essential components of the system. Users and others interacting with the system will usually have little formal training, whereas operators will almost always have some, because they are hired and trained specifically to operate the system. Operators, in addition, are often required to monitor the automation and take over its functions if necessary, or to share the control function in various ways. The presence of trained operators allows the system designer to engineer specific training requirements into the system—a luxury that is not generally available in the case of end users. On the other hand, the quality of administration for many systems is very low, and it is not clear that the “users” who will insert components into EmNets are any less qualified than many administrators.
Usability and safety are very different, and potentially conflicting, features: straightforward attempts to improve one can negatively affect the other. For example, usability often dictates that operations carried out frequently be convenient and perceptually salient in order to maximize learnability and efficiency. But if such actions are also potentially hazardous, safety concerns may suggest that they be hidden or rendered difficult to execute by accident, for example, by requiring redundant inputs or repeated confirmation; usability concerns, by contrast, would dictate that a user enter the data only once. One way to address this might be to devise a data-encoding scheme that uses error-detecting and error-correcting codes. This would allow the system to detect the simple data entry errors known to be most common among humans (for example, transposition of adjacent items or missed elements) and, upon such detection, to produce either nonsense or correctable states. Such design conflicts are not necessarily insurmountable, as suggested above, but they are unlikely to be dealt with satisfactorily in complex real-world systems in the absence of design methodologies that explicitly give both issues their due. Such efforts are important even where safety has absolute priority over usability, since safety measures that ignore usability are far more likely to be circumvented or otherwise subverted than those that take usability into account.
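As an illustration of such an error-detecting entry code, the sketch below uses the classical Luhn check digit, which catches all single-digit errors and nearly all adjacent transpositions. The report names no particular scheme, so this choice is purely illustrative:

```python
def luhn_check_digit(digits: str) -> str:
    """Check digit that makes the full number pass the Luhn test."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # equivalent to summing the two digits
        total += d
    return str((10 - total % 10) % 10)

def luhn_valid(number: str) -> bool:
    """True only if the trailing check digit matches the payload."""
    return luhn_check_digit(number[:-1]) == number[-1]

code = "7992739871" + luhn_check_digit("7992739871")
assert luhn_valid(code)                            # correct entry accepted
swapped = code[:2] + code[3] + code[2] + code[4:]  # adjacent transposition
assert not luhn_valid(swapped)                     # detected, re-entry forced
```

On detection the system can simply refuse the entry and ask the user to retype it, which is the "correctable state" the text describes, without requiring redundant input for every transaction.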
It should be noted that although complex systems tend to present more usability challenges than simpler systems, complexity per se is not the main deterrent to learnability or other aspects of usability. There are vastly complex systems (for example, the telephone network) for which high levels of usability have been achieved, and there are relatively simple devices (such as the alarm clocks found in most hotel rooms) that consistently baffle all but the most determined user. Usability of complex systems is maximized when (1) complexity that does not need to be exposed to the user is kept hidden and (2) complexity that must be exposed is exposed according to a cohesive, understandable conceptual model: one that maximizes the predictability of the system’s behavior, supports the user’s efforts to generalize about those behaviors, and minimizes special cases and arbitrary actions.
Creating Mental Models
Mental models are a convenient concept for examining problems of usability. A mental model of a device can be thought of as an individual’s idea of the expected behavior of the system as a whole (that is, how the system works) plus information about the current system state. Thus, the mental model amounts to a user’s expectations about the behavior of the devices he or she is using. Users form mental models of systems—how they operate or are internally organized—even if they know virtually nothing about the systems. Different users will form different models of the same device; indeed, research shows that a single individual may have several (even contradictory) models of a system (Leveson, 1995; Norman, 1998). An automobile mechanic will have a much more detailed (and hopefully more accurate) model of a car than will a casual driver who has never learned how a car works. Products aimed at mass markets and untrained users must be designed with these mental models in mind to ensure easy operation and commercial success.
Users often generate a mental model for a newly encountered device by analogy to other devices perceived to be similar. In many cases, this analogy may be loose and casual. For example, a first-time user of a digital videodisk player probably will attempt to treat it like a videocassette recorder or a compact disk player. In other cases, the match between
the old and new may be quite deliberate on the part of the designer. For example, antilock brake systems (ABS) were deliberately designed to be as indistinguishable as possible from conventional braking systems. The ABS example provides an interesting illustration of the pitfalls of user-model analogies and the conflict between usability and safety. Although most users tend to think of ABS systems as exact functional replacements for conventional brakes (and new-car user manuals tend to describe them in these terms), the analogy breaks down under poor traction conditions, in which conventional systems should be pumped whereas ABS systems should not. The analogy has been drawn to enhance usability and learnability (no special training is required and the driver need not know which type of brakes the car has), but it also has led to serious accidents.
Usability may also be enhanced by designs based on standard metaphors. A familiar example is the desktop metaphor used in the design of graphical user interfaces for personal computers. In this paradigm, files and other abstractions defined by the computer’s system architecture are presented to the user as graphical metaphorical objects on the screen. These objects are imbued with certain consistent behaviors. For example, screen icons can be dragged around, they stay where they are placed, double-clicking on them opens the program, and so on. In effect, the user interface is endowed with a consistent physics more or less analogous to the physics of the real world and, to the extent that the analogy is appropriate and consistent, the user is able to apply schemata developed in dealing with real-world things to the metaphorical “things” behind the glass. It is important to realize, however, that metaphor is a means and not an end. When metaphors are clean and well chosen, they can become a powerful means of providing consistency in support of user models. But it is the consistency that ultimately has the greatest value, not the metaphor per se, and often the causes of consistency and ease of learning are better served by other techniques.
An example of such a technique is the use of idiom in interface design (see Cooper, 1995). Idioms are design conventions that, unlike metaphors, cannot readily be guessed but must be learned, by either instruction or experiment. For example, many graphical user interfaces require the user to double-click the mouse while the pointer is at a particular location on the screen to effect a desired action, such as opening a document. Unlike the process of dragging an icon or window to reposition it, there is nothing metaphorical about the double-clicking operation—that is, it does not obviously correspond to anything the user has encountered in the real world. Nonetheless, if implemented consistently and with proper attention to human factors
issues, the technique is easy to learn and use. In effect, this arbitrary behavior becomes an important part of the physics of the interface without ever having been part of the physics of the real world.
To design for usability, designers need a grasp of the probable mental models that users will bring to (or infer from) the device. As obvious as this may be, such understanding is difficult to achieve, in large part because designers typically know things that users do not. They are inevitably better informed about the true nature of the device than a normal user is, and they cannot easily act as if they were typical users. Yet this is exactly what is required to design against a user model that may be imperfect.21 There is a large literature on methods that help a designer take the user’s perspective, most notably various approaches to user studies and so-called heuristic analysis techniques (Nielsen and Molich, 1990; Nielsen, 1994). More work is needed on developing good conceptual models of systems.
EmNet-Specific Usability Issues
Many of the usability issues raised by EmNets are common to all complex information systems. However, there are characteristics of ubiquitous computing in general and EmNets in particular that present new and unique challenges to the usability engineer. In particular, the distributed nature of EmNets and their often intimate coupling with the physical environment represent a fundamentally new relationship between device and user. A personal computer is a thing one sits in front of and uses. How will end users think about EmNets? Probably not as “things.” They may think of them as capabilities, as smart spaces, or as properties of the built environment. They may think of them as magic. Often, they will not think of them at all. The usability of such systems will not be the sum of the usability of their component parts. It will instead be an emergent property of the behaviors of the visible nodes and their invisible counterparts, of their interactions, and of the physical environments to which they are coupled. What is the source of global coherence in a system that may be spatially distributed, incrementally designed, and implemented using heterogeneous and independently developed components? Although the existence of such system-level behavior, as a superset of the behavior of the individual components, is not new, it is nonetheless difficult to address. What is new is that the very existence of the complex system may be unknown to the end user.22
Usability Research Topics Deserving Attention
EmNets raise interesting challenges related to the usability of systems with emergent properties. When large networks of devices are used to create smart environments, for example, the process of designing these networks to enhance usability and of ensuring helpful, effective conceptual models will be complicated by the sheer complexity of these systems. More research is needed in the following areas:
Design for users and interaction. Approaches need to be developed for designing EmNets of increasing complexity that are usable with minimal training and without detailed knowledge of the system design or of the complex interconnections among system components. EmNets should be designed to accommodate users with varying skill levels and to reflect the fact that the systems will often be invisible to the individuals interacting with them.
Appropriate conceptual models. Further study is needed on the construction of appropriate conceptual models—that is, models that describe the critical aspects of the system and that are understandable and usable by people. Further study is also needed on developing appropriate specifications. People need to learn how to design for both novice and expert use of EmNets and for situations where the person interacting with the system is not aware of any interaction. Furthermore, attention needs to be paid to the different types of assistance that various users will need. System maintenance personnel will have a different and often deeper understanding of the system than will system operators.
Computer Science and Telecommunications Board (CSTB), National Research Council. 1999. Trust in Cyberspace. Washington, D.C.: National Academy Press.
CSTB, National Research Council. 2000a. The Digital Dilemma: Intellectual Property in the Information Age. Washington, D.C.: National Academy Press.
CSTB, National Research Council. 2000b. Summary of a Workshop on Information Technology Research for Federal Statistics. Washington, D.C.: National Academy Press.
CSTB, National Research Council. 2000c. Making IT Better: Expanding Information Technology Research to Meet Society’s Needs. Washington, D.C.: National Academy Press.
CSTB, National Research Council. 2001. The Internet’s Coming of Age. Washington, D.C.: National Academy Press.
Cooper, A. 1995. About Face: The Essentials of User Interface Design. Foster City, Calif.: IDG Books.
Fisher, David A. 1998. Design and Implementation of EASEL: A Language for Simulating Highly Distributed Systems. Pittsburgh, Pa.: Carnegie Mellon University. Available online at <http://www.sei.cmu.edu/programs/nss/design-easel.pdf>.
Friedman, B. 1999. Value-Sensitive Design: A Research Agenda for Information Technology. No. SBR-9729633. Washington, D.C.: National Science Foundation.
Gershon, Nahum. 1995. “Human information interaction.” Fourth International World Wide Web Conference, Boston, Mass., December.
Government Printing Office (GPO). Code of Federal Regulations. Title 47, Vol. 3, Parts 40 to 69, revised as of October 1, 1998. Available online at <http://frwebgate2.access.gpo.gov/cgibin/waisgate.cgi?WAISdocID=177665407+1+0+0&WAISaction=retrieve>.
Hunt, Warren. 1994. “FM8501: A verified microprocessor.” Ph.D. dissertation, LNCS 795. Heidelberg, Germany: Springer-Verlag. Abstract available online at <http://www.cli.com/hardware/fm8501.html>.
Ishii, Hiroshi, and Brygg Ullmer. 1997. Presentation at CHI 97 Conference on Human Factors in Computing Systems, March.
Lessig, Lawrence. 1999. Code and Other Laws of Cyberspace. New York: Basic Books.
Leveson, N.G. 1995. Safeware: System Safety and Computers. Reading, Mass.: Addison-Wesley.
Leveson, N.G., J.D. Reese, S. Koga, L.D. Pinnel, and S.D. Sandys. 1997. “Analyzing requirements specifications for mode confusion errors,” Workshop on Human Error, Safety, and System Development, Glasgow.
Lucas, Peter. 2000. “Pervasive information access and the rise of human-information interaction.” Proceedings of ACM CHI ’00 Conference on Human Factors in Computing Systems. Invited session, April.
Lutz, R.R. 1993. “Analyzing software requirements errors in safety-critical embedded systems.” Proceedings of the IEEE International Symposium on Requirements Engineering, January.
Neisser, U. 1976. Cognition and Reality. San Francisco, Calif.: W.H. Freeman and Co.
Nielsen, J. 1994. “Heuristic evaluation.” Usability Inspection Methods. J. Nielsen and R.L. Mack, eds. New York: John Wiley & Sons.
Nielsen, J., and R. Molich. 1990. “Heuristic evaluation of user interfaces.” Proceedings of ACM CHI ’90 Conference on Human Factors in Computing Systems.
Norman, D.A. 1998. The Invisible Computer. Cambridge, Mass.: MIT Press.
Roth, S.F., M.C. Chuah, S. Kerpedjiev, J.A. Kolojejchick, and P. Lucas. 1997. “Towards an information visualization workspace: Combining multiple means of expression.” Human-Computer Interaction Journal 12(1 and 2):131-185.
Sarter, N.D., and D. Woods. 1995. “How in the world did I ever get into that mode? Mode error and awareness in supervisory control.” Human Factors 37:5-19.
Schneider, Fred B. 1993. “What good are models and what models are good.” Distributed Systems, S. Mullender, ed. Reading, Mass.: Addison-Wesley.
Thibodeau, Patrick. 2000. “‘Huge’ privacy questions loom as wireless use grows.” Computerworld, December 18.
Tognazzini, Bruce. 1992. Tog on Interface. Reading, Mass.: Addison-Wesley.
Wiener, Earl L., and Renwick E. Curry. 1980. “Flight-deck automation: Promises and problems.” Ergonomics 23(10):995-1011.
Card, S.K., T.P. Moran, and A. Newell. 1980. “Computer text-editing: An information processing analysis of a routine cognitive skill.” Cognitive Psychology 12:32-74.
Card, S.K., T.P. Moran, and A. Newell. 1983. The Psychology of Human-Computer Interaction. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Fowler, M., and K. Scott. 1997. UML Distilled: Applying the Standard Object Modeling Language. Reading, Mass.: Addison-Wesley.
Gray, W.D., B.E. John, and M.E. Atwood. 1993. “Project Ernestine: Validating a GOMS Analysis for Predicting and Explaining Real-World Task Performance.” Human-Computer Interaction 8(3):237-309.
Kieras, D., and P.G. Polson. 1985. “An approach to the formal analysis of user complexity.” International Journal of Man-Machine Studies 22:365-394.
Minsky, M. 1974. “A framework for representing knowledge.” MIT-AI Laboratory Memo 306. (Shorter version in Readings in Cognitive Science, Allan Collins and Edward E. Smith, eds., San Mateo, Calif.: Morgan-Kaufmann, 1992.)
Perrow, C. 1984. Normal Accidents: Living with High-Risk Technology. New York: Basic Books.
Schank, R., and R. Abelson. 1977. Scripts, Plans, Goals and Understanding. Hillsdale, N.J.: Lawrence Erlbaum Associates.