

4 Some Potential Research Directions for Furthering the Usability, Security, and Privacy of Computer Systems
Pages 24-36

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 24...
... The following sections summarize research directions that emerged from the questions posed to workshop participants, from the breakout sessions and the reports back from those sessions, and from the plenary presentations and discussion.

DIMENSIONS OF USABILITY, SECURITY, AND PRIVACY

Definitions

Breakout session participants spent a considerable amount of time grappling with how to define usable security, working under the belief that one cannot improve something that cannot be measured, and that one cannot measure something without a good definition of what one seeks to measure.
From page 25...
... Finally, participants cautioned that academic studies of usability are not necessarily representative of the user population. They typically employ small groups of college students, which reflects poor experimental design for two reasons: the group sizes are too small, and they are not drawn from a group that is representative of the broader population.
From page 26...
... That is, there are few good ways to determine the effectiveness or utility of any given security measure, and the development of such metrics remains an open area of research. With respect to usability, participants noted a multitude of potentially relevant measures, both of usability itself (which might be gauged in terms of user errors, time required to configure or modify a system, time to master a system, or user satisfaction ratings) and of system effectiveness or utility.
From page 27...
... Workshop presentations and discussions approached this topic from several perspectives: user mental models, risk perception and communication, and user incentives. (Incentives, another important topic with respect to understanding users and their motivations, are considered separately below, because they also apply to other actors.)
From page 28...
... Taking an epidemiological perspective, it would be useful to understand how many individual users' mental models would have to be changed to make a noticeable impact in improving computer security "for the masses." A fourth topic is to study how well users understand their own user model. Can they assess their technical proficiency well enough to understand whether or not they are capable of making informed security decisions?
From page 29...
... Better education can help overcome usability challenges; however, workshop participants cautioned that an emphasis on education should not be used as an excuse for not improving usability. One suggested area for research is to achieve a better understanding of the knowledge that users currently have and how they attained that knowledge.
From page 30...
... For example, employees can be given positive incentives through the use of awards for maintaining good security, or they can be given negative incentives through reprimands or poorer evaluations for security failures. In the marketplace, positive incentives might include favorable reviews of products with better security, whereas negative incentives would include liability for inadequate security or negative reports in the press.
From page 31...
... Behavioral aspects surfaced repeatedly in the workshop discussions, notably in the observation that individuals sometimes seem not to act in a fully rational way in protecting security. Such seemingly irrational behavior can have multiple explanations: actors may not be well informed, actors may be considering a wider range of outcomes than the system designer anticipated, or ideas such as "bounded rationality" developed in behavioral economics may apply.
From page 32...
... Systems often require users to change passwords periodically, which may also lead to users writing them down or adopting guessable mnemonic schemes for generating their passwords. Each system typically requires its own password, often with rules about acceptable user names and passwords that conflict across systems, so users must keep track of a wide array of credentials.
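To make that conflict concrete, the following minimal sketch (in Python, with policy rules invented for illustration; it is not drawn from the workshop discussion) shows two hypothetical systems whose password requirements cannot both be met by any single password:

    import re

    # Hypothetical password policies for two different systems;
    # the specific rules are invented for illustration only.
    POLICIES = {
        "site_a": {"min_len": 12, "require_symbol": True, "allow_symbols": True},
        "site_b": {"min_len": 8, "max_len": 10, "require_symbol": False, "allow_symbols": False},
    }

    def satisfies(password, policy):
        """Check a single password against one system's rules."""
        has_symbol = re.search(r"[^A-Za-z0-9]", password) is not None
        if len(password) < policy.get("min_len", 0):
            return False
        if len(password) > policy.get("max_len", 10**9):
            return False
        if policy.get("require_symbol") and not has_symbol:
            return False
        if not policy.get("allow_symbols", True) and has_symbol:
            return False
        return True

    if __name__ == "__main__":
        candidate = "correct-horse-battery"
        for site, policy in POLICIES.items():
            verdict = "accepts" if satisfies(candidate, policy) else "rejects"
            print(site, verdict, repr(candidate))
        # site_a demands 12+ characters including a symbol, while site_b caps
        # length at 10 and forbids symbols, so no single password can satisfy
        # both; one reason users end up juggling many distinct credentials.

Because the first system insists on a length and symbols that the second forbids, no single memorable choice works for both, which is precisely the burden that pushes users toward written-down or formulaic passwords.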
From page 33...
... Processes and Tools

Participants suggested a number of development and management processes and tools that would help advance usable security and privacy, as well as the associated research challenges.

One commercial example of a technology for preventing the linking of visits across multiple parties that rely on a common identifier is "U-Prove," offered by Credentica, a firm recently purchased by Microsoft. It relies on a zero-knowledge scheme developed by Stefan Brands and colleagues.
From page 34...
... Participants suggested several questions regarding how these conventional abstractions might be reconsidered in order to enhance usable security and privacy. How "far down the stack" (that is, how far down into the design of the underlying system) is it necessary to go to provide usable security?
From page 35...
... The green machine, used for important things, would demand accountability, whereas the red one would not. This approach immediately raises a usability question: How does a user readily identify green and red machines and understand their distinct purposes?
From page 36...
... and blacklists (lists of prohibited entities) can both be used to authorize access or to grant privileges.
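To illustrate that symmetry, the short sketch below (in Python, with entity names invented for illustration; it is not a mechanism described at the workshop) expresses the same access decision first as a whitelist and then as a blacklist:

    # Minimal sketch of whitelist versus blacklist authorization.
    # Entity names and policies are hypothetical, for illustration only.
    WHITELIST = {"alice", "bob"}   # only these entities are permitted
    BLACKLIST = {"mallory"}        # everyone except these entities is permitted

    def allowed_by_whitelist(entity):
        # Default-deny: access is granted only to explicitly listed entities.
        return entity in WHITELIST

    def allowed_by_blacklist(entity):
        # Default-allow: access is granted unless the entity is explicitly prohibited.
        return entity not in BLACKLIST

    if __name__ == "__main__":
        for user in ("alice", "carol", "mallory"):
            print(user,
                  "whitelist:", allowed_by_whitelist(user),
                  "blacklist:", allowed_by_blacklist(user))

The usability trade-off is visible even at this scale: the whitelist fails closed but must be updated for every legitimate newcomer, whereas the blacklist fails open and must be updated for every newly discovered bad actor.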

