Embedded, Everywhere: A Research Agenda for Networked Systems of Embedded Computers

4 Building Trustworthy Networked Systems of Embedded Computers

Users of networked systems of embedded computers (EmNets) will demand certain characteristics, including reliability, safety, security, privacy, and ease of use (usability). These features can be encapsulated in the term “trustworthiness.”1 Such features must be built into a system from the start; it is difficult, if not impossible, to add them in an adequate and cost-effective manner later on. A major challenge in adding these sorts of features to EmNets is the combination of an open system architecture with distributed control.

The need for high reliability in almost all EmNets is obvious, but how to ensure it is less so. Today’s techniques for designing reliable systems require knowledge of all components of a system—knowledge that cannot be ensured in the rapidly changing environments in which EmNets will be used. Testing mechanisms that apply to standard networks of computing devices may well fail to apply in the context of EmNets, where components may shut down to conserve power or may be limited in computing power or available bandwidth. These and other reliability questions will need to be studied if EmNets of the future are to be trusted.

Some EmNets may operate unattended and be used to control dangerous devices or systems that, through either normal or flawed operation, could lead to significant human, economic, or mission losses. Similar problems were encountered early on in manufacturing automation; here the systems are potentially larger, certainly more distributed, and operate in much less controlled environments. The constraints on EmNets—including long lifetimes, changes in constituent parts, and resource limitations—strain existing methods for evaluating and ensuring system safety. In addition, many EmNets will be operated—and perhaps even configured—by end users with little technical training. New designs may be needed that allow untrained users to operate these systems safely and effectively.

Accidents related to software are already starting to increase in proportion to the growing use of software to control potentially dangerous systems (Leveson, 1995). Networking embedded systems together, as envisioned for many new applications, will only add to these problems by enabling a larger number of potentially more complex interactions among components—interactions that cannot be anticipated or properly addressed by system users. New system and software engineering frameworks are needed to deal with these problems and enhance the safety of EmNets.

Security and privacy will also be required in many systems. The amount of information that can be collected by EmNets is staggering, the variety is wide, and the potential for misuse is significant. Capabilities are needed to verify that the information cannot be compromised or used by those who have no right to it and/or to cope with the likelihood that misuse or other problems will occur. In addition, these systems will need to be protected from tampering and from attacks mounted from outside the system. New networking technologies will introduce the potential for new types of attacks.

1 For an in-depth treatment of trustworthy networked information systems that incorporates many of these aspects, see CSTB (1999).
Security can help with elements of reliability and safety as well, since it not only involves satisfying objectives but also incorporates protective mechanisms.

Finally, EmNets need to be usable. The systems must be easy to learn, easy to use, and amenable to understanding, often at different levels of detail by different types of users. As these systems become more complex and open to more varieties of computer-mediated interaction, they need to be designed in such a way that end users and operators understand what a system is doing. Systems that violate users’ expectations lead to frustration at best and errors at worst; it will be important to keep user expectations in mind in design decisions as these systems become more complex and pervasive. In addition, many of these systems will not be used directly by individuals—rather, individuals will interact with EmNets in various contexts, often without realizing it. Understanding how such interactions will take place and what people’s conscious and even subconscious expectations might be is an additional challenge for usability design in EmNets.
The unique constraints on EmNets raise additional concerns; this chapter discusses the challenges inherent in designing EmNets to be reliable, safe, secure, private, and usable, and suggests the research needed to meet these challenges.

RELIABILITY

Reliability is the likelihood that a system will satisfy its behavioral specification under a given set of conditions and within a defined time period. The failure of a particular component to function at all is only one form of unreliability; other forms may result when components function in a way that violates the specified behavior (requirements). Indeed, a component that simply stops functioning is often the simplest to deal with, because such a failure can be detected easily (by the other components or the user) and, often, isolated from the rest of the system. Far more difficult failure cases are those in which a component sends faulty information or instructions to other parts of the networked system (examples of so-called Byzantine faults); such a failure can contaminate all components, even those that (by themselves) are functioning normally. Systems need to be designed with great care to address the expected failures. Because EmNets will often be unattended or operated by nonexpert users, operator intervention cannot be relied upon to handle most failures.

Current failure models for distributed systems revolve around the ways in which individual components or communications infrastructure can fail (Schneider, 1993). Fault-tolerant designs of such systems generally assume that only a small number of failures of any type will occur.
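To make that classical assumption concrete: one standard fault-tolerant design replicates a component and takes a majority vote over the replicas, which masks a small number of arbitrary (Byzantine) value faults. The sketch below is purely illustrative and is not drawn from the report; the function and the sample readings are hypothetical.

```python
from collections import Counter

def majority_vote(readings):
    """Mask faulty replicas by taking the majority value.

    With 2f+1 replicas, up to f arbitrary (Byzantine) value faults can be
    outvoted -- the classical assumption that only a small number of
    failures occur at any one time.
    """
    if not readings:
        raise ValueError("no replicas responded")
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        # The "small number of failures" assumption has been violated.
        raise RuntimeError("no majority: too many faulty replicas")
    return value

# Three replicated sensors, one of which sends a faulty value:
print(majority_vote([21.5, 21.5, 99.0]))  # the faulty reading is outvoted
```

The design collapses, of course, once more than f replicas misbehave at the same time, which is precisely why the report questions whether such models carry over to EmNets.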
It is not at all clear that these models apply to EmNets, in which the individual components are assumed to be easily and inexpensively replaceable, and the usual mechanisms for detecting faults (such as a request for a keep-alive message) may be prohibitively expensive in terms of power or bandwidth or may generate false failure notifications (in the case of components that shut down occasionally to conserve power). The development of techniques for fault-tolerant designs of systems in which the individual components are resource-bound and easily replaceable is an area ripe for investigation.

Nor are current techniques for verifying the reliability of design implementations readily applicable to EmNets. While significant work on the hardware verification of nontrivial systems dates back to at least the mid-1980s (see, for example, Hunt’s work on the FM8501 microprocessor (Hunt, 1994)), it is more appropriate for individual components and may not be applicable to EmNets. Each component, to be reliable, must correspond to its specification, and the overall system will be considered reliable if it adheres to the system specification. Experience has shown,
however, that merely confirming the reliability of individual components of a system is insufficient for understanding the behavior of the overall system.

Existing methods for ensuring reliability are tied to tests of system implementations against the appropriate specification. It should be noted that testing traditionally occurs after design and implementation. While testing and validating complex designs after the fact tends to have more appeal than building in reliability and assurance from the beginning (which calls for greater rigor and costs more), it is an extremely difficult task that already consumes a large fraction of the overall expense, schedule, and labor of an engineering project. Microprocessor design teams typically allocate one validation person for every two designers, and the trend is toward parity in future designs. Many software projects report deploying one validation person for every software writer. Companies are investing heavily in testing because (1) shorter product development schedules no longer permit a small set of testers to work on a project for a long time, (2) the overall complexity of the designs is making it increasingly difficult to achieve the product quality necessary for introducing a new product, and (3) the volumes of product being shipped today make the possible expense of product recalls intolerable to most companies.

“If you didn’t test it, it doesn’t work” is a general validation philosophy that serves many hardware and software design projects well. The idea is that unless the designer has anticipated the many ways in which a product will be used and the validation team has tested them comprehensively, any uses that were overlooked will be the first avenues of failure. But the problem is not as simple as listing the product’s features and checking them one by one (although that is indeed one aspect of normal validation).
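One common way to go beyond checking features one by one is a pairwise (2-way) covering suite, which exercises every pair of feature settings in far fewer runs than the full cross product. The greedy sketch below is a hypothetical illustration, not a technique prescribed by the report; the feature names and settings are invented.

```python
from itertools import combinations, product

def pairwise_tests(features):
    """Greedily build a small test suite covering every pair of settings.

    `features` maps a feature name to its possible settings. The full
    cross product grows multiplicatively with each feature; covering
    every *pair* of settings usually needs only a small fraction of it.
    """
    names = sorted(features)
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in features[a]
        for vb in features[b]
    }
    tests = []
    while uncovered:
        # Pick the candidate configuration covering the most uncovered pairs.
        best = max(
            (dict(zip(names, vals))
             for vals in product(*(features[n] for n in names))),
            key=lambda t: sum(
                ((a, t[a]), (b, t[b])) in uncovered
                for a, b in combinations(names, 2)
            ),
        )
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
        tests.append(best)
    return tests

# Hypothetical EmNet node configuration space:
features = {"power": ["on", "saver"], "radio": ["ble", "zigbee"], "role": ["sensor", "relay"]}
suite = pairwise_tests(features)
print(len(suite), "tests cover every pair; the full cross product has", 2 * 2 * 2)
```

Even this only probes two-way interactions; as the text goes on to note, the more insidious flaws can hide in higher-order combinations and in usage patterns no one enumerated.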
Design flaws that manifest themselves so simply are usually easy to detect. The more insidious product design flaws appear only when multiple product features are combined or exercised in unusual ways. The complexity of such situations hampers efforts to detect flaws in advance. For EmNets, the challenge of testing every system feature against every possible real-world usage will be daunting, even for an accurately configured system in initial deployment. But what happens a few months later, when the system owner begins to extend the system in ad hoc ways, perhaps upgrading some nodes and adding others supplied by another vendor? The central challenge to EmNet reliability is to extend today’s tools and validation methods—for example, the Willow project on survivable systems2 and Easel (Fisher, 1999), a simulator for modeling unbounded systems,3 may offer insights—to the much more difficult scope of large-scale EmNets.

2 For more information, see <http://www.cs.colorado.edu/serl/its/>.

Reliability Research Topics Deserving Attention

The following research topics deserve attention:

- Fault models and recovery techniques for EmNets that take into account their scale, long life, open architecture, distributed control aspects, and the replaceability of their components. Appropriate models of failure and how to deal with failures in systems that are distributed and have the scale, longevity, openness, and component characteristics of EmNets have yet to be investigated. Until such investigations take place, it will be difficult to design reliable systems, much less test implementations of those designs. Such research should be linked to research into the computational models appropriate for such systems (see Chapter 5).

- EmNet monitoring and performance-checking facilities. Over the past several decades, considerable research has gone into monitoring and system health management, but EmNets pose unique problems owing to their potential scale and reconfigurability and the scarcity of component energy.

- Verification of EmNets’ correctness and reliability. The size and distributed nature of EmNets may preclude complete system testing outside of simulation. Advances in analysis and simulation techniques would increase confidence in cases where complete testing is virtually impossible before the system is used in the field.4

SAFETY

Safety refers to the ability of a system to operate without causing an accident or an unacceptable loss.5 Many EmNets (for example, a home entertainment system) will not present significant safety problems even if they fail, although such failures might frustrate or inconvenience users. Other failures may raise significant safety issues. Safety and reliability do not necessarily go hand in hand.
An unreliable system or component is not necessarily unsafe (for example, it may always fail into a safe state, an erroneous software output may not cause the system to enter an unsafe state, or a system that stops working may even decrease safety risks), whereas a highly reliable system may be unsafe (for example, the specified behavior may be unsafe or incomplete, or the system may perform unintended functions). Therefore, simply increasing the reliability of the software or system may have no effect on safety and, in some systems, may actually reduce safety. Reliability is defined in terms of conformance with a specification; accidents usually result from incorrect specifications.

3 For more information, see <http://www.cert.org/easel/easel_foundations.html>.
4 See Making IT Better (CSTB, 2000c) for a discussion of the limitations of the simulation of complex systems today.
5 “Accident” is not an engineering term; it is defined by society. In the aviation community, for example, the term “accident” is used to refer to the loss of the hull of an aircraft; anything else is considered an incident, even though human life may be at risk.

Whether viewed as a constraint on, or a requirement of, the system design, safety concerns limit the acceptable design space. Like the other desirable characteristics addressed in this chapter, safety cannot effectively be added onto a completed design, nor can it be tested or measured “into” a design. Safety constraints need to be identified early in the design process so that the system can be designed to satisfy them. Testing and measurement simply provide assurance of how effectively the design incorporates already-specified safety considerations. Engineers have developed a range of techniques for ensuring system safety, many of which have been extended to systems with embedded computers; however, much more research is needed (Leveson, 1995) in this area, which has attracted comparatively little attention from computer science researchers. In system safety engineering, safety efforts start early in the concept development stage.
The process involves identifying system hazards (i.e., system states that can lead to accidents or unacceptable losses); using them as the basis for writing system safety requirements and constraints; designing the system to eliminate the hazards and their effects; tracing any residual safety-related requirements and constraints that cannot be eliminated at the system level down to requirements and constraints on the behavior of individual system components (including software); and verifying that the efforts were successful.

EmNets introduce added difficulties to this process. They greatly increase the number of states and behaviors that must be considered and the complexity of the interactions among potentially large numbers of interconnected components. Although all large digital systems experience similar problems, EmNets are unusual in that many operate in real time and with limited direct human intervention. Often they are either unattended or managed by human operators who lack technical skills or training. Furthermore, EmNets afford the possibility of more dynamic configuration than do many other types of systems. Many EmNets are likely to arise from ad hoc extensions of existing systems or from several systems tied together in ways unanticipated by the original designers.

Historically, many accidents have been attributed to operator error.
Indeed, a common reason for automating safety-critical systems (apart from increasing efficiency) is to eliminate operator error. Automation has done this, but it has also created a new type of error, sometimes called technology-induced human error. Many of these new errors are the result of what human factors experts have labeled technology-centered automation, whereby designers focus most of their attention on the mapping from software inputs to outputs, mathematical models of required functionality, and the technical details and problems internal to the computer. Little attention is usually given to evaluating software in terms of whether it provides transparent and consistent behavior that supports users in their monitoring and control tasks. Research on various types of system monitoring, including hierarchical monitoring and standards thereof, may prove useful here.

Without the kind of support mentioned previously, technology-centered automation has changed the reasons for accidents and the types of human error involved. Humans have not been eliminated from most high-tech systems, but their role has changed significantly: often, they are monitors or high-level managers of the automation, which directly controls the system. On modern fly-by-wire aircraft, for example, all pilot commands to move the control surfaces go through a computer—there are no direct mechanical linkages. Automation designs seldom support the new roles humans are playing. And yet, when the inevitable human error results from what aircraft human factors experts have called clumsy automation (Wiener and Curry, 1980), the accident is blamed on the human rather than on the system or automation design. All of the recent Airbus accidents and some of the recent Boeing accidents involved pilot confusion arising from the design of the automation (Leveson et al., 1997).
Examples include mode confusion and lack of situational awareness (both related to inadequate feedback, among other things), increased pilot workload during emergencies and high-stress periods, automation and pilots fighting over control of the aircraft, increased amounts of typing, and pilot distraction. Human factors experts have tried to overcome clumsy automation by changing the human interface to the automation, changing user training, or designing new operational procedures to eliminate the new human errors resulting from poor automation design. These efforts have had limited success. Some have concluded that “training cannot and should not be the fix for bad design” (Sarter and Woods, 1995) and have called for more human-centered automation. Until such automation becomes widespread, however, coping mechanisms will be required. If researchers can identify the automation features that lead to human error, they should be able to design the software in such a way that errors are reduced without sacrificing the goals of computer use, such as increased productivity and efficiency.

EmNets complicate the process of error reduction simply because of their increased complexity and the opacity of system design and operation. Today, what can be automated easily is automated, leaving the rest for human beings. Often this causes the less critical aspects of performance to be automated, leaving to humans the more critical aspects. Worse, the systems often fail just when they are most needed—when conditions are complex and dangerous, when there are multiple failures, or when the situation is unknown. Unfortunately, if the routine has been automated, the human controller has been out of the loop, so that when the automated systems fail, it takes time for the human operator to regain a sense of the system state, time that may not be available. EmNets increase the likelihood that human intervention will not be readily available. Approaches to automation should be changed from doing what is relatively easily achievable to doing what is most needed by human operators and other people affected by system behavior. This principle is, of course, applicable to more than just EmNets. The solution will need to incorporate the economic and institutional contexts as well as the technology.

Safety Research Topics Deserving Attention

Widespread use of EmNets will compound the existing challenges involved in designing safety into systems. These challenges will need to be addressed quickly to avoid future problems and to ensure that the potential of EmNets is effectively tapped. To address problems of safety in EmNets adequately, greatly expanded research will be needed in a number of areas, including the following:

- Designing for safety. Safety must be designed into a system, including the human-computer interface and interaction.
New design techniques will be required to enforce adherence to system safety constraints in EmNet behavior and to eliminate or minimize critical user errors. In addition, designers often make claims about the independence of components and their failure modes to simplify the design process and make systems more amenable to analysis, but they lack adequate tools and methodologies for ensuring independence or generating alerts about unknown interdependencies. The system itself, or the design tools, will need to provide support for such capabilities. This may well require changes in the way computer scientists approach these sorts of problems, as well as collaboration with and learning from others, such as systems engineers, who have addressed these issues in different domains.

- Hazard analysis for EmNets. The deficiencies in existing hazard analysis techniques when applied to EmNets need to be identified. Designers and implementers of EmNet technology who may not be familiar with such techniques will need to understand them. Hazard analysis usually requires searching for potential sources of hazards through large system state spaces; EmNets will complicate this search process for the reasons already discussed. The results of hazard analysis are critical to the process of designing for safety and of verifying that the designed and implemented system is safe.

- Validating requirements. Most accidents related to software stem from requirements flaws—incorrect assumptions about the required behavior of the software and the operational environment. In almost all accidents involving computer-controlled systems, the software performed according to specification, but the specified behavior was unsafe (Leveson, 1995; Lutz, 1993). Improved specification and analysis techniques are needed to deal with the challenges posed by EmNets. These techniques should take into account that user needs, and therefore specifications, will evolve.

- Verifying safety. In regulated industries, and even in unregulated ones in which liability or costly recalls are a concern, special procedures are required to provide evidence that fielded systems will exhibit adequate levels of safety. EmNets greatly complicate the quest for such assurance, and new approaches will be needed as the complexity and the number and variety of potential failure modes or hazardous system behaviors increase.

- Ensuring safety in upgraded software. Even if the software is designed and assured to be safe in the original system context, software can be expected to change continually throughout the life of a system as new functionality is added and bugs are fixed.
Each change will require assurances that safety has not been compromised, but because it will not be practical to redo a complete software system safety analysis for every change, new techniques will be needed to minimize the effort required to verify safety when system and software design changes are proposed and to cope with the consequences of safety failures. Users can be expected to extend the system in ways unanticipated in the original design, adding new components, trying out new functions, and so on.6 In addition, the system and software design may become unsafe if there are unanticipated changes in the environment in which the software is operating (a likely occurrence in a battlefield situation, for example). Methods are needed to audit the physical components of the system and the environment (including system operators) to determine whether the changes violate the assumptions underlying the hazard analysis. Approaches to software upgrades must address safety concerns in hardware components, too (for example, component audits could include calls to hardware components to validate their IDs).

6 Further complicating the situation is the fact that backup safety features, meant to be invoked only in emergencies, are often discovered by human operators and used as primary resources. Thus, if the system automatically detects a human error and produces an automatic correction, the human will soon learn always to make the error; oftentimes it is easier to do the task wrong and let the system correct it than to go through the laborious act of getting it right.

SECURITY

Security relates to the capability to control access to information and system resources so that they cannot be used or altered by those lacking proper credentials. In the context of EmNets, security relates to controlled access to the subnetworks, the information stores, the devices that are interconnected, and the computing and communication resources of a given network. Many of the research issues raised with respect to safety in EmNets also apply to security. In addition, security analysis needs to assume that an adversary is actively trying to abuse, break, or steal from the system (an assumption not usually made in safety analysis).

Security can be difficult to achieve in information systems of all types, but it will perhaps be especially so in EmNets. Not only will the deployment of EmNets containing various sensor technologies allow the physical world to become more tightly interconnected with the virtual world, but the networking of embedded computers will also tend to increase the vulnerability of these systems by expanding the number of possible points of failure, tampering, or attack, making security analysis more difficult. The range of products into which processing and networking capabilities may be embedded will greatly expand the number of nodes at which security will need to be explicitly considered and will influence the expectations at each node.
Many of these nodes will consist of ordinary, everyday devices in which security is not currently a concern (thermostats, audio equipment, and so on); however, mischief will become an increasing risk factor. Their close connection to the physical world and their interconnection with larger networks accessible by more people with unknown motives will make lapses of security potentially more damaging, increasing the risks associated with EmNets. In a military context, of course, the compromise of even fairly prosaic devices (such as food storage equipment or asset monitoring systems) that are part of a larger EmNet could have serious security implications.

EmNets’ configurations will be much more dynamic, even fluid, than those of typical networked systems, and EmNet user interaction models may be quite different from those in traditional networks. These properties have significant impact on security (and privacy). For example, as one moves from place to place, one’s personal area network may diffuse into other networks, as might happen in a battlespace environment. Interactivity may not be under an individual’s direct control, and the individual may not understand the nature of the interactivity. Various nodes will engage in discovery protocols with entities in contexts they have never encountered before.

Some EmNets may be homogeneous, and their connectivity with other networks may be straightforward. In such cases, traditional network security techniques will suffice, with policy and protection methods executing in a gateway device. In heterogeneous, diffuse, fluid networks, traditional network security methods will not be effective. Rather, trust management and security policies and methods will be the responsibility of individual nodes and applications. This may put demands on the operating system (if any) that runs on those individual nodes. Nodes may need to distinguish between secure operating modes and more permissive modes (especially during discovery, configuration, and update procedures).

Protecting System Boundaries

A key problem is how to protect the network from outside attack. The physical world has a number of well-understood and easily recognizable protective barriers and security structures. Retail stores, for example, have a physical structure to protect valuables. Even though these stores are open to the public, shoplifters can be thwarted by a well-defined notion of inside and outside and by sensors used to defeat attempts to conceal goods. Such approaches have few analogues in the virtual world. Further, in the case of shoplifting, a risk management calculation is performed: that is, some level of security breach (shrinkage) is acceptable to merchants because absolute security would be unacceptable to customers.
Risk management is also required for EmNets; however, calculating the risk is extremely challenging, and the risk itself highly variable, because there are so many unknowns in these systems. The physical isolation of a network, together with extremely rigid and secure protocols for attaching terminals, is the only highly reliable method for protecting networked information systems from external threats (that is, attacks from outside hackers and others without access privileges), but this approach is not viable in many systems that need to be interconnected to be useful.

In EmNets, physical boundaries and remoteness are effectively erased by the presence of sensors and network connectivity, and notions of entry and exit begin to fade. Except in physically isolated networks, the concepts of inside and outside generally do not exist. Yet this is one way in which users, and even designers, think about security problems—a mindset that, in itself, is
online profiling has demonstrated, however, notification is not a simple process. Many questions need to be answered: When should notification be mandatory? How can users be effectively signaled? Given individual differences in sensitivity and awareness, it may be difficult to provide adequate notification to some without annoying others. This may especially be the case in smart spaces, where all sorts of information may be collected and possibly linked to an individual. More research is needed to address issues like these.

Additional means may also be needed to control the disclosure of information. The issue of disclosure arises when information is collected for one purpose but used for other purposes (often referred to as mission creep). Disclosure is often provided for in privacy policies for Web sites, but EmNets often involve more passive interactions in which disclosure is less convenient. For example, a smart space may collect information about an individual and provide it to others with the intention of providing a useful service, but the individual being probed may not appreciate it. Are there techniques that would allow users to control the flows of information about them? How can a user answer questions such as: Where is my information? Who has it? How did it get there? Who is responsible if something goes wrong? In addition, what conditions are needed for users to trust others not to misuse their data, and can EmNets be designed to engender an atmosphere of trust that is not due solely to ignorance of their existence in a given situation?
Considerable work has begun on technologies that allow consumers to express privacy preferences14 and purveyors of intellectual property to control the dissemination of their work.15 However, these approaches are being developed in the context of Web-based electronic commerce; whether or not they are extendable to a broader set of EmNet-based applications is unclear. It would seem to be very difficult for anyone to avoid giving up personal information to these networks. There are risks even when everyone’s intentions are well understood. It would be useful to have some general principles whereby the risk of inadvertent privacy violation can be minimized. These might include disposing of information as soon as possible after it is used; storing information near the point of use; and avoiding the removal of such data from local control whenever possible. Use of anonymity or pseudonymity and of protocols that prevent the linking of data sets could also be considered. 14 For example, see the Platform for Privacy Preferences Project (P3P) at <http://www.w3.org/P3P/>. 15 See Chapter 5 of CSTB (2000a), a report on intellectual property in the information age.
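The disposal principles suggested above (discarding information as soon as possible after use, keeping it near the point of use) could be prototyped as a store that attaches a retention deadline to every record it accepts. This is a hypothetical sketch, not a mechanism proposed in the report; the class name, API, and ten-second lifetime are invented for illustration.

```python
import time

class MinimalRetentionStore:
    """Toy record store that enforces a disposal deadline on every
    piece of personal data it accepts (hypothetical API)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._records = {}  # key -> (value, expiry timestamp)

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self._records[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        # Every read first purges anything past its deadline, so data
        # cannot linger merely because nobody asked for it to be deleted.
        now = time.time() if now is None else now
        self.purge(now)
        entry = self._records.get(key)
        return entry[0] if entry else None

    def purge(self, now=None):
        now = time.time() if now is None else now
        expired = [k for k, (_, exp) in self._records.items() if exp <= now]
        for k in expired:
            del self._records[k]
```

The design choice here is that disposal is the default path, exercised on every access, rather than a separate cleanup job that might never run.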
The fundamental issue is the ability of individuals to control the collection and dissemination of information about them in an environment in which daily transactions and events—and the events associated with their personal environment—involve EmNets or are controlled or monitored by them. Research is needed to better understand people’s expectations about their rights and abilities to exercise such control and resist intrusion. What are the expectations about privacy, and how are they evolving as people become more exposed to and familiar with various technologies? Can one outline the privacy rights that people either expect or legally possess, and can one identify ways in which different types of EmNets threaten those rights and run counter to those expectations? Conversely, as EmNets become ubiquitous, are there ways to use the technology to defend privacy rights, or will privacy necessarily be lost? As the FTC has recognized (Thibodeau, 2000), many privacy questions will need to be rethought in a world of increasing automation and instantaneous wireless communication. Both privacy expectations and case law are evolving. It will be necessary to clearly understand the trade-offs involved. More than other systems, EmNets tend to be ubiquitous and enveloping, unavoidable parts of an environment in which individuals cannot control their own interaction. In these cases, privacy issues cannot be addressed by education and personal policies alone. Rather, they become (even more) a matter of public policy.16

Privacy As Related to Security

While security and privacy are very distinct properties, they are related (for example, security can provide mechanisms with which to protect privacy). Privacy is often said to involve the right or desire to be left alone.
In the context of EmNets, it more often has to do with the right or intention of a person to keep certain personal information confidential. A breach of security may result in a breach of privacy when someone without proper credentials gains access to private information; a breach of privacy may also occur when information that is freely shared over a network is abused or when EmNets are deployed into various environments without notification, consent, or full disclosure. Breaches of security may also involve the dissemination, through an EmNet, of information that is intended to be shared for a narrow purpose but is used nonetheless for broader purposes because of an inability to precisely control data flows or the use of information collected for one purpose for a completely different purpose.

16 CSTB anticipates a policy-oriented study on privacy in the information age to begin sometime in 2001. In addition, Chapter 5 of the CSTB report The Internet’s Coming of Age (CSTB, 2001) examines implications for broad public policy, including issues related to privacy and anonymity on the Internet.

Security and privacy are related for another reason, too: both may be studied and understood in a given context by analyzing threats and risks. The security threats to a given network can be catalogued; countermeasures for those threats specified; and then residual risks of failure, oversight, and inadequacy identified. Similarly, the threats to privacy from the deployment or specific use of EmNets may be catalogued, means for protecting and preserving privacy specified, and the residual risks analyzed and managed. Privacy issues may be somewhat more challenging to deal with than security issues because they entail varying expectations and values and because access control practices often call for conveying personal information. Privacy seems far more malleable than security, because what counts as private is socially negotiated; privacy violations may occur when individuals have different understandings about the boundaries and contexts of privacy (this will be especially true with new technologies and where the technology moves information across multiple social contexts). Expectations are in flux, as the Internet is demonstrating that there is less privacy than may once have been assumed. Further, people differ with respect to the types of information they wish to keep private, the conditions under which they might allow access to different sorts of information (for example, health records, financial information, and online purchases), and the degree to which they value privacy.

Privacy Research Topics Deserving Attention

While the privacy issues discussed above raise many public policy questions, they also raise several technical research issues that need to be addressed.
Both the policy and technical issues demand much additional research, but this research need not be EmNet-specific. In addition, while many of the policy and technical issues may not be directly applicable to defense and military situations, the need in such situations for identification (for example, friend or foe?) and for need-to-know classification of information makes some of these points relevant. Privacy has largely been dealt with by advocacy, legal, and political processes; however, it will increasingly involve and require technical mechanisms and contextualizations. The committee strongly encourages additional research in the many policy issues surrounding privacy and makes the following recommendations with respect to technical concerns:

Flexible policy management. EmNets, and indeed all information systems, do implement some form of privacy policies. Often, however, this is by default, not by design. Research is needed to develop a calculus of privacy17 and ways to enable flexible, configurable privacy policies in systems so that as external situations or policies change, the system can be easily adjusted to reflect that. Systems should be designed to allow the incorporation of a wide range of potential privacy policies.

Informed consent. Implementing informed consent in technological systems is a difficult challenge. EmNets seem likely to make this problem that much harder. Owing to the passive and ubiquitous nature of many of these systems, users will often not be aware that information about them is being gathered. Notifying users who may not even be aware of the existence of the EmNet is a difficult problem. Even more difficult is acquiring meaningful informed consent from those users. Research into these and related issues is essential.

Accountability research. Research into possible legal requirements for the protection of personal information may be needed to ensure adequate accountability. The goal should be to ensure that specific individuals or agents, probably those who deploy EmNets and will use the information gained therefrom, are deemed responsible and accountable for the protection of an individual’s private information collected on those networks.18 Privacy and/or anonymity preservation techniques need to factor in accountability. Accountability, like privacy, is not absolute (Lessig, 1999). What is needed is technology to support a range of preferences, which may vary with users and contexts, for enhancing privacy, accountability, and other values.

Anonymity-preserving systems. Research in designing systems whose default policy is to preserve individual users’ anonymity is needed. It is an open question to what extent these systems would need to allow completely untraceable use rather than just strict identity protection except in the presence of authorized agents.
Another possible avenue of investigation would be to enable anonymity-preserving authentication19—for example, to enable systems to determine that individuals are members of a certain group (say, doctors in a hospital) but not to allow more fine-grained identification.20 17 A calculus of privacy can be thought of as a method of analysis, reasoning, or calculation that takes into account the many factors relevant to privacy (people’s expectations, the characteristics of disclosed information, ease of access, etc.) and the relationships among them. 18 P3P can be seen as the early stages of a technology that gives people more control over their data and provides information about how Web sites handle personal information. 19 Another CSTB committee is currently investigating authentication technologies and their privacy implications. 20 CSTB’s report Summary of a Workshop on Information Technology Research for Federal Statistics (CSTB, 2000b) has a section on limiting disclosure, which addresses some of the inherent difficulties in protecting identities in the face of extramural information.
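The idea of proving group membership without fine-grained identification can be sketched with a deliberately naive scheme: an issuer gives every member of a group the same MAC-tagged token, so a verifier learns only the group, never the member. This is an invented illustration, not a design from the report; the names and API are hypothetical, and a shared token has obvious weaknesses (it can be lent or replayed, and there is no revocation). Real anonymity-preserving authentication uses constructions such as group or ring signatures.

```python
import hashlib
import hmac
import os

# Issuer-held secrets, one per group (hypothetical setup).
GROUP_KEYS = {"hospital-doctors": os.urandom(32)}

def issue_token(group):
    """Give every member of a group an identical MAC-tagged token, so
    presenting it proves membership without identifying the member."""
    tag = hmac.new(GROUP_KEYS[group], group.encode(), hashlib.sha256).hexdigest()
    return {"group": group, "tag": tag}

def verify(token):
    """Check the tag; the verifier learns only the group name."""
    key = GROUP_KEYS.get(token["group"])
    if key is None:
        return False
    expected = hmac.new(key, token["group"].encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking tag bytes via timing.
    return hmac.compare_digest(expected, token["tag"])
```

Even this toy version makes the key property concrete: the verifier's decision depends only on the (group, tag) pair, so two doctors presenting tokens are indistinguishable to it.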
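The flexible policy management recommended earlier, with policies adjustable as external situations change, might look in miniature like a reconfigurable rule table mapping a (data category, purpose) pair to a decision. The categories, purposes, and API below are invented for illustration; this is a sketch of the idea, not a real EmNet policy engine.

```python
class PrivacyPolicy:
    """Tiny configurable policy table: rules map a (data category,
    purpose) pair to a decision and can be changed at runtime."""

    def __init__(self, default="deny"):
        self.default = default  # deny anything not explicitly allowed
        self.rules = {}

    def set_rule(self, category, purpose, decision):
        self.rules[(category, purpose)] = decision

    def check(self, category, purpose):
        return self.rules.get((category, purpose), self.default)

# A smart building might allow occupancy data for climate control but
# nothing else; when external policy changes, one call updates behavior.
policy = PrivacyPolicy()
policy.set_rule("occupancy", "climate-control", "allow")
```

The default-deny choice reflects the report's point that policies exist by default anyway; making the default explicit turns it into a design decision.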
USABILITY

Usability refers to the effectiveness and efficiency of a system in meeting the goals and expectations of its users. All complex systems raise usability issues, and EmNets are no exception. Usability is not a single trait of a system but rather an umbrella term encompassing a number of distinct (and often conflicting) traits, including learnability, efficiency, effectiveness, and satisfaction. Moreover, these traits are not intrinsic to the system but must each be evaluated with respect to specific classes of users. For example, what is intuitive and therefore effective for a casual or beginning user may be tedious and verbose to an experienced user. Further, in the case of EmNets, it may not be accurate to refer to people who interact with them as “users” per se. Consider the case of an EmNet controlling various systems of a building; generally the EmNet will be essentially invisible to the people interacting with its features. An important distinction must also be made between users who are outside the system boundary and operators who are within the system boundary and are, in effect, essential components of the system. Users and/or others interacting with the system will usually have little formal training, whereas operators will almost always have some training because they are hired and trained specifically to operate the system. Operators, in addition, often are required to monitor the automation and take over its functions, if necessary, or to share the control function in various ways. The presence of trained operators allows the system designer to engineer specific training requirements into the system—a luxury that is not generally available in the case of end users.
On the other hand, the quality of administration for many systems is very low, and it is not clear that the “users” who will insert components into EmNets are any less qualified than many of the administrators. Usability and safety are very different—and potentially conflicting—features. Straightforward attempts to improve one can negatively affect the other. For example, usability often dictates that operations carried out frequently be convenient and perceptually salient in order to maximize learnability and efficiency. But if such actions are also potentially hazardous, safety concerns may suggest that they be hidden or rendered difficult to execute by accident, for example, by requiring redundant inputs or repeated confirmation. Usability concerns, by contrast, would dictate that a user enter the data only once. One way to address this might be to devise a data encoding scheme that uses error-correcting and error-detecting codes. This would allow detecting the simple data entry errors known to be most commonly made by humans (for example, transposition of adjacent items or missed elements) and, upon such detection, producing either nonsense or correctable states. Such design conflicts are not necessarily insurmountable, as suggested above, but they are unlikely to be dealt with satisfactorily in complex real-world systems in the absence of design methodologies that explicitly give both issues their due. Such efforts are important even where safety has absolute priority over usability, since safety measures that ignore usability are far more likely to be circumvented or otherwise subverted than are those that take usability into account. It should be noted that although complex systems tend to present more usability challenges than simpler systems, complexity per se is not the main deterrent to learnability or other aspects of usability. There are vastly complex systems (for example, the telephone network) for which high levels of usability have been achieved, and there are relatively simple devices (such as the alarm clocks found in most hotel rooms) that are consistently baffling to all but the most determined user. Usability of complex systems is maximized when (1) complexity that does not need to be exposed to the user is kept hidden and (2) complexity that must be exposed is presented according to an underlying cohesive, understandable, conceptual model that maximizes the predictability of the system’s behavior, supports the user’s efforts to generalize about those behaviors, and minimizes special cases and arbitrary actions.

Creating Mental Models

Mental models are a convenient concept for examining problems of usability. A mental model of a device can be thought of as an individual’s idea of the expected behavior of the system as a whole (that is, how the system works) plus information about the current system state. Thus, the mental model amounts to a user’s expectations about the behavior of the devices he or she is using. Users form mental models of systems—how they operate or are internally organized—even if they know virtually nothing about the systems.
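One concrete instance of the error-detecting encoding mentioned in the usability-safety discussion above, offered here as an illustration rather than anything proposed in the report, is the Luhn check-digit scheme used in payment card numbers: it detects every single-digit entry error and most transpositions of adjacent digits, so a mistyped code simply fails validation instead of being acted upon.

```python
def luhn_check_digit(digits):
    """Compute the Luhn check digit for a string of decimal digits.
    Appending it yields a code that detects any single-digit error
    and most adjacent-digit transpositions."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:      # double every second digit from the right
            d *= 2
            if d > 9:       # casting out nines keeps it a single digit
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def luhn_valid(code):
    """True if the last digit of `code` is the correct check digit."""
    return luhn_check_digit(code[:-1]) == code[-1]
```

For example, the payload 7992739871 takes check digit 3, and swapping two adjacent digits in the resulting code makes validation fail, which is exactly the "producing either nonsense or correctable states" behavior the text describes.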
Different users will form different models of the same device; indeed, research shows that a single individual may have several (even contradictory) models of a system (Leveson, 1995; Norman, 1998). An automobile mechanic will have a much more detailed (and hopefully more accurate) model of a car than will a casual driver who has never learned how a car works. Products aimed at mass markets and untrained users must be designed with these mental models in mind to ensure easy operation and commercial success. Users often generate a mental model for a newly encountered device by analogy to other devices perceived to be similar. In many cases, this analogy may be loose and casual. For example, a first-time user of a digital videodisk player probably will attempt to treat it like a videocassette recorder or a compact disk player. In other cases, the match between
the old and new may be quite deliberate on the part of the designer. For example, antilock brake systems (ABS) were deliberately designed to be as indistinguishable as possible from conventional braking systems. The ABS example provides an interesting illustration of the pitfalls of user-model analogies and the conflict between usability and safety. Although most users tend to think of ABS systems as exact functional replacements for conventional brakes (and new-car user manuals tend to describe them in these terms), the analogy breaks down under poor traction conditions, in which conventional systems should be pumped whereas ABS systems should not. The analogy has been drawn to enhance usability and learnability (no special training is required and the driver need not know which type of brakes the car has), but it also has led to serious accidents. Usability may also be enhanced by designs based on standard metaphors. A familiar example is the desktop metaphor used in the design of graphical user interfaces for personal computers. In this paradigm, files and other abstractions defined by the computer’s system architecture are presented to the user as graphical metaphorical objects on the screen. These objects are imbued with certain consistent behaviors. For example, screen icons can be dragged around, they stay where they are placed, double-clicking on them opens the program, and so on. In effect, the user interface is endowed with a consistent physics more or less analogous to the physics of the real world and, to the extent that the analogy is appropriate and consistent, the user is able to apply schemata developed in dealing with real-world things to the metaphorical “things” behind the glass. It is important to realize, however, that metaphor is a means and not an end.
When metaphors are clean and well chosen, they can become a powerful means of providing consistency in support of user models. But it is the consistency that ultimately has the greatest value, not the metaphor per se, and often the causes of consistency and ease of learning are better served by other techniques. An example of a usability technique is the use of idiom in interface design (see Cooper, 1995). Idioms are design conventions that, unlike metaphors, cannot readily be guessed but rather must be learned, by either instruction or experiment. For example, many computer interfaces that use graphical interfaces require the user to double-click the mouse while the pointer is at a particular location on the screen to effect a desired action, such as opening a document. Unlike the process of dragging an icon or window to reposition it, there is nothing metaphorical about the double-clicking operation—that is, it does not obviously correspond to anything the user has encountered in the real world. Nonetheless, if implemented consistently and with proper attention to human factors
issues, the technique is easy to learn and use. In effect, this arbitrary behavior becomes an important part of the physics of the interface without ever having been part of the physics of the real world. In designing for usability, good designers will require a grasp of the probable models that users will tend to bring to (or infer from) the device. As obvious as this may be, such understanding is difficult to achieve, in large part because designers typically know things that users do not. They are inevitably better informed about the true nature of the device than a normal user is, and designers cannot easily act as if they are typical users. Yet, this is exactly what is required to design against a user model that may be imperfect.21 There is a large literature on methods that help a designer take the user’s perspective, most notably various approaches to user studies and so-called heuristic analysis techniques (Nielsen and Molich, 1990; Nielsen, 1994). More work is needed on developing good conceptual models of systems.

EmNet-Specific Usability Issues

Many of the usability issues raised by EmNets are common to all complex information systems. However, there are characteristics of ubiquitous computing in general and EmNets in particular that present new and unique challenges to the usability engineer. In particular, the distributed nature of EmNets and their often intimate coupling with the physical environment represent a fundamentally new relationship between device and user. A personal computer is a thing one sits in front of and uses. How will end users think about EmNets? Probably not as “things.” They may think of them as capabilities, as smart spaces, or as properties of the built environment. They may think of them as magic. Often, they will not think of them at all. The usability of such systems will not be the sum of the usability of their component parts.
It will instead be an emergent property of the behaviors of the visible nodes and their invisible counterparts, of their interactions, and of the physical environments to which they are coupled. What is the source of global coherence in a system that may be spatially distributed, incrementally designed, and implemented using heterogeneous and independently developed components? Although the existence of such system-level behavior, as a superset of the behavior of the individual components, is not new, it is nonetheless 21 The relationship between implementation models and user models is discussed at length by Cooper (1995) and Tognazzini (1992).
difficult to address. What is new is that the very existence of the complex system may be unknown to the end user.22

Usability Research Topics Deserving Attention

EmNets raise interesting challenges related to the usability of systems with emergent properties. When large networks of devices are used to create smart environments, for example, the process of designing these networks to enhance usability and of ensuring helpful, effective models will be complicated by the very complexity of these systems. More research is needed in the following areas:

Design for users and interaction. Approaches need to be developed for designing EmNets of increasing complexity that are usable with minimal training and without detailed knowledge of the system design or of the complex interconnections among system components. EmNets should be designed to accommodate users with varying skill levels and to accommodate the fact that they will often be invisible to the individuals interacting with them.

Appropriate conceptual models. Further study is needed on the construction of appropriate conceptual models—that is, models that describe

22 A further consideration is the relationship between EmNets and their operators. One could speculate that the experience might be less like running a specific machine than participating in a confederation. A lot will be going on, and couplings will often be loose. One could also imagine the operator finding himself or herself more in the role of influencer than absolute controller. For example, EmNets widely coupled to the outside world may have severe responsiveness constraints that prevent the immediate execution of operator commands. In spatially distributed systems, communications cannot be instantaneous, and in bandwidth-constrained situations may be extremely sluggish. This, too, may contribute to the operator’s sense of being only loosely coupled to the system.
Efforts should be made to generalize lessons learned from the control of existing EmNets or EmNet-like systems, such as the telephone network and the power grid, both of which have benefited from a great deal of rigorous human factors research. Research synergies may also exist with areas of distributed control being worked on by DARPA and other agencies, such as collaborations between humans and confederations of agents and control of robot swarms. In many cases, the locus of interaction design is likely to shift from user/device interactions to user/information interactions. The emerging disciplines of information architecture and human information interaction (Gershon, 1995; Lucas, 2000) shift the focus of design from devices as such to the information that those devices mediate. Examples of research topics in this area include architectures for universal identity of data objects, replication architectures, techniques for maintaining perceived constancy of identity across heterogeneous display media, tangible interface techniques (Ishii and Ullmer, 1997), and information-centric user interfaces and polymorphic rendering (Roth et al., 1997).
the critical aspects of the system and that are understandable and usable by people. Further study is also needed on developing appropriate specifications. People need to learn how to design for both novice and expert use of EmNets and for situations where the person interacting with the system is not aware of any interaction. Furthermore, attention needs to be paid to the different types of assistance that various users will need. System maintenance personnel will have a different and often deeper understanding of the system than will system operators.

REFERENCES

Computer Science and Telecommunications Board (CSTB), National Research Council. 1999. Trust in Cyberspace. Washington, D.C.: National Academy Press.
CSTB, National Research Council. 2000a. The Digital Dilemma: Intellectual Property in the Information Age. Washington, D.C.: National Academy Press.
CSTB, National Research Council. 2000b. Summary of a Workshop on Information Technology Research for Federal Statistics. Washington, D.C.: National Academy Press.
CSTB, National Research Council. 2000c. Making IT Better: Expanding Information Technology Research to Meet Society’s Needs. Washington, D.C.: National Academy Press.
CSTB, National Research Council. 2001. The Internet’s Coming of Age. Washington, D.C.: National Academy Press.
Cooper, A. 1995. About Face: The Essentials of User Interface Design. Foster City, Calif.: IDG Books.
Fisher, David A. 1998. Design and Implementation of EASEL: A Language for Simulating Highly Distributed Systems. Pittsburgh, Pa.: Carnegie Mellon University. Available online at <http://www.sei.cmu.edu/programs/nss/design-easel.pdf>.
Friedman, B. 1999. Value-Sensitive Design: A Research Agenda for Information Technology. No. SBR-9729633. Washington, D.C.: National Science Foundation.
Gershon, Nahum. 1995. “Human information interaction,” Fourth International World Wide Web Conference, December.
Boston, Mass.
Government Printing Office (GPO). Code of Federal Regulations. Title 47, Vol. 3, Parts 40 to 69, revised as of October 1, 1998. Available online at <http://frwebgate2.access.gpo.gov/cgibin/waisgate.cgi?WAISdocID=177665407+1+0+0&WAISaction=retrieve>.
Hunt, Warren. 1994. “FM8501: A verified microprocessor.” Ph.D. dissertation, LNCS 795. Heidelberg, Germany: Springer-Verlag. Abstract available online at <http://www.cli.com/hardware/fm8501.html>.
Ishii, Hiroshi, and Brygg Ullmer. 1997. Presentation at CHI 97 Conference on Human Factors in Computing Systems, March.
Lessig, Lawrence. 1999. Code and Other Laws of Cyberspace. New York: Basic Books.
Leveson, N.G. 1995. Safeware: System Safety and Computers. Reading, Mass.: Addison-Wesley.
Leveson, N.G., J.D. Reese, S. Koga, L.D. Pinnel, and S.D. Sandys. 1997. “Analyzing requirements specifications for mode confusion errors,” Workshop on Human Error, Safety, and System Development, Glasgow.
Lucas, Peter. 2000. “Pervasive information access and the rise of human-information interaction.” Proceedings of ACM CHI ‘00 Conference on Human Factors in Computing Systems. Invited session, April.
Lutz, R.R. 1993. “Analyzing software requirements errors in safety-critical embedded systems.” Proceedings of the IEEE International Symposium on Requirements Engineering, January.
Neisser, U. 1976. Cognition and Reality. San Francisco, Calif.: W.H. Freeman and Co.
Nielsen, J. 1994. “Heuristic evaluation.” Usability Inspection Methods. J. Nielsen and R.L. Mack, eds. New York: John Wiley & Sons.
Nielsen, J., and R. Molich. 1990. “Heuristic evaluation of user interfaces.” Proceedings of ACM CHI ’90 Conference on Human Factors in Computing Systems.
Norman, D.A. 1998. The Invisible Computer. Cambridge, Mass.: MIT Press.
Roth, S.F., M.C. Chuah, S. Kerpedjiev, J.A. Kolojejchick, and P. Lucas. 1997. “Towards an information visualization workspace: Combining multiple means of expression.” Human-Computer Interaction Journal 12(1 and 2):131-185.
Sarter, N.D., and D. Woods. 1995. “How in the world did I ever get into that mode? Mode error and awareness in supervisory control.” Human Factors 37:5-19.
Schneider, Fred B. 1993. “What good are models and what models are good.” Distributed Systems, S. Mullender, ed. Reading, Mass.: Addison-Wesley.
Thibodeau, Patrick. 2000. “‘Huge’ privacy questions loom as wireless use grows.” Computerworld, December 18.
Tognazzini, Bruce. 1992. Tog on Interface. Reading, Mass.: Addison-Wesley.
Wiener, Earl L., and Renwick E. Curry. 1980. “Flight-deck automation: Promises and problems.” Ergonomics 23(10):995-1011.

BIBLIOGRAPHY

Card, S.K., T.P. Moran, and A. Newell. 1980. “Computer text-editing: An information processing analysis of a routine cognitive skill.” Cognitive Psychology 12:32-74.
Card, S.K., T.P. Moran, and A. Newell. 1983. The Psychology of Human-Computer Interaction. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Fowler, M., and K. Scott. 1997. UML Distilled: Applying the Standard Object Modeling Language. Reading, Mass.: Addison-Wesley.
Gray, W.D., B.E. John, and M.E. Atwood. 1993. “Project Ernestine: Validating a GOMS Analysis for Predicting and Explaining Real-World Task Performance.” Human-Computer Interaction 8(3):237-309.
Kieras, D., and P.G. Polson. 1985. “An approach to the formal analysis of user complexity.” International Journal of Man-Machine Studies 22:365-394.
Minsky, M. 1974. “A framework for representing knowledge.” MIT-AI Laboratory Memo 306. (Shorter version in Readings in Cognitive Science, Allan Collins and Edward E. Smith, eds., San Mateo, Calif.: Morgan-Kaufmann, 1992.)
Perrow, C. 1984. Normal Accidents: Living with High-Risk Technology. New York: Basic Books.
Schank, R., and R. Abelson. 1977. Scripts, Plans, Goals and Understanding. Hillsdale, N.J.: Erlbaum Associates.