One possible significance is that security inherently involves an actor other than the user—the active adversary who will try to take advantage of usability flaws and may also attempt to mislead the user through “social engineering.” Another is that security requires focusing the user’s attention not only on the task at hand but also on the future consequences and aftereffects of the task. Yet another is that security is generally not the end user’s primary concern. Further investigation of the similarities and differences might yield insights as to what lessons can be transferred directly from other usability work and where the issues are in fact different.

METRICS, EVALUATION CRITERIA, AND STANDARDS

Metrics—that is, measures of how usable or secure a system is—are important to assessing progress (e.g., how much better is this system than another one?) and to making rational decisions about investment (e.g., is this system “good enough,” or is further investment in improvements warranted?). Workshop participants observed that security has long resisted precise measurement—let alone in combination with usability. That is, there are few good ways to determine the effectiveness or utility of any given security measure, and the development of security metrics remains an open area of research.1 With respect to usability, participants noted a multitude of potentially relevant measures of usability (such as user errors, time required to configure or modify a system, time to master a system, or user satisfaction ratings) and of system effectiveness or utility. Further research would help identify which of these measures, or what others, are most useful.
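
As a purely illustrative sketch (not a metric proposed at the workshop), the following Python fragment shows how the kinds of usability measures mentioned above (user errors, time on task, task completion, and satisfaction ratings) might be aggregated from raw study observations. The TaskObservation fields and the summary statistics are hypothetical choices made for illustration only.

from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskObservation:
    completed: bool     # did the user finish the security-related task?
    errors: int         # number of user errors observed during the task
    seconds: float      # time taken to configure or complete the task
    satisfaction: float # post-task satisfaction rating on a 1-5 scale

def usability_summary(observations: list[TaskObservation]) -> dict[str, float]:
    """Aggregate raw study observations into candidate usability measures."""
    return {
        "completion_rate": mean(1.0 if o.completed else 0.0 for o in observations),
        "mean_errors": mean(o.errors for o in observations),
        "mean_time_s": mean(o.seconds for o in observations),
        "mean_satisfaction": mean(o.satisfaction for o in observations),
    }

if __name__ == "__main__":
    study = [
        TaskObservation(True, 0, 95.0, 4.5),
        TaskObservation(True, 2, 210.0, 3.0),
        TaskObservation(False, 5, 300.0, 1.5),
    ]
    print(usability_summary(study))

Such a summary says nothing, by itself, about which measures matter most; as noted above, determining that remains a research question.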

Related to metrics is the question of what criteria should be used in evaluating and accepting the usability and security of an IT system and how one might go about certifying a system as aligning security, privacy, and usability. How might such criteria be instantiated as future guidelines? Are there exemplar software applications that could be identified as benchmarks for security and usability and therefore serve as a source for creating a set of criteria for usable, yet secure, systems? Several discussions considered how such criteria might vary according to application, context, or perspective. For example, how might one divide applications into categories in which similar weights would be given to security and usability? Despite the likely differences among the categories, might it be possible to develop a common checklist that contains a core set of usability and security criteria that would cover 80 percent of all applications?
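
As a hypothetical illustration of how a common core checklist might be combined with category-specific weightings, the Python sketch below invents a few checklist items, application categories, and security/usability weights; none of these values or categories comes from the workshop discussion.

# Shared core criteria that might apply across most application categories.
CORE_CHECKLIST = [
    "safe defaults",
    "clear error recovery",
    "minimal configuration burden",
    "understandable security indicators",
]

# Hypothetical relative weight given to security vs. usability, per category.
CATEGORY_WEIGHTS = {
    "online banking": {"security": 0.7, "usability": 0.3},
    "social media": {"security": 0.4, "usability": 0.6},
    "enterprise email": {"security": 0.6, "usability": 0.4},
}

def composite_score(category: str, security_score: float, usability_score: float) -> float:
    """Combine per-dimension scores (each 0-1) using the category's weights."""
    w = CATEGORY_WEIGHTS[category]
    return w["security"] * security_score + w["usability"] * usability_score

if __name__ == "__main__":
    print(composite_score("online banking", security_score=0.9, usability_score=0.6))

The point of the sketch is only that a single core checklist could be reused while the relative emphasis on security and usability shifts by category, one possible way to cover a large fraction of applications with common criteria.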

1 For a detailed discussion of the challenges associated with cybersecurity metrics and possible research directions, see NRC, Toward a Safer and More Secure Cyberspace, 2007, pp. 133-142.


