they can do their jobs, and so forth. Frequently the costs and risks are unknown, and there are no easy ways to estimate them.

A proper allocation of economic incentives is essential to improving computer security. Users, administrators, organizations, and vendors respond to the incentives that they perceive. Users just want to get their work done. Without an appropriate understanding of the risks involved and how proper security may help avoid those risks, they view security as a burden, causing them to ignore it or to attempt to work around it. Organizations do not measure the cost of the time that users spend on security and therefore do not demand usable security. Vendors thus have minimal incentive to supply it.

Many people think that security in the real world is based on locks. In fact, real-world security depends mainly on deterrence, and hence on the possibility and severity of punishment. The reason that one’s house is not burgled is not that the burglar cannot get through the lock on the front door; rather, it is that even a small chance of getting caught, combined with a significant punishment, makes burglary uneconomic. It is difficult to deter attacks on a computer connected to the Internet because it is difficult to find “the bad guys.” One way to fix this is to communicate only with parties that are accountable, that is, parties one can punish. Punishment can take many forms: fines, ostracism from some community, firing, or jail, among others.

Some punishments require identifying the responsible party in the physical world, but others do not. For example, to deter spam, one might reject e-mail unless it is signed by someone known to the receiver or unless it comes with “optional postage” in the form of a link certified by a trusted third party, such as Amazon or the U.S. Postal Service; if the receiver clicks the link, the sender contributes a dollar to a charity.
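
As a concrete illustration, the acceptance rule just described can be sketched in a few lines of code. This is only a sketch of the policy, not part of any real mail system; the type and names used here (Message, TRUSTED_POSTAGE_CERTIFIERS, known_senders) are hypothetical.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str                     # identity claimed by the sender
    signed_by: str | None           # who signed the message, if anyone
    postage_certifier: str | None   # third party certifying the postage link, if any

# Third parties this particular receiver chooses to trust to certify postage.
TRUSTED_POSTAGE_CERTIFIERS = {"example-postal-service", "example-retailer"}

def accept(msg: Message, known_senders: set[str]) -> bool:
    """Accept e-mail only from senders who are accountable to this receiver."""
    # Case 1: signed by someone the receiver already knows.
    if msg.signed_by is not None and msg.signed_by in known_senders:
        return True
    # Case 2: carries "optional postage" certified by a trusted third party;
    # the postage costs the sender a dollar if clicked, so spam is uneconomic.
    if msg.postage_certifier in TRUSTED_POSTAGE_CERTIFIERS:
        return True
    # Otherwise the sender is not accountable, so reject the message.
    return False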

The choice of safe inputs and the choice of accountable sources are both made by one’s own system, not by any centralized authority. These choices will often depend on information from third parties about identity, reputation, and so forth, but which parties to trust is also one’s own choice. All trust is local.
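
A small sketch may make “all trust is local” concrete: third parties supply reputation information, but which raters to believe, how to weigh them, and what threshold to apply are all part of the receiver’s own configuration, evaluated on the receiver’s own system. The names and numbers below are illustrative assumptions only.

# Local configuration: which reputation services this receiver chooses to
# believe and the weight given to each. No central authority sets this list.
LOCAL_TRUSTED_RATERS = {"rater-a": 0.7, "rater-b": 0.3}
LOCAL_THRESHOLD = 0.5

def locally_trusted(sender: str, reports: dict[str, dict[str, float]]) -> bool:
    """Combine third-party reputation reports using a locally chosen policy."""
    score = 0.0
    for rater, weight in LOCAL_TRUSTED_RATERS.items():
        # Reports from raters the receiver has not chosen to trust are ignored.
        score += weight * reports.get(rater, {}).get(sender, 0.0)
    # The final decision is made here, on the receiver's own system.
    return score >= LOCAL_THRESHOLD

# Example: two raters report on the same sender; the local policy decides.
reports = {"rater-a": {"bob@example.com": 0.9}, "rater-b": {"bob@example.com": 0.2}}
print(locally_trusted("bob@example.com", reports))  # True: 0.7*0.9 + 0.3*0.2 = 0.69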

To be practical, accountability needs an ecosystem that makes it easy for senders to become accountable and for receivers to demand it. If there are just two parties, they can get to know each other in person and exchange signing keys. Because this approach does not scale, there is also a need for third parties that can certify identities or attributes, as they do today for cryptographic keys. This need not hurt anonymity unduly, since the third parties can preserve anonymity except when there is trouble, or accept bonds posted in anonymous cash.
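
The two cases in this paragraph, exchanging signing keys directly and relying on a third party to certify them, can be sketched with ordinary public-key signatures. The sketch below assumes Ed25519 signatures via the Python "cryptography" package; the algorithm, library, message format, and names (verified, accountable) are illustrative choices, not anything the text prescribes.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def verified(pub: Ed25519PublicKey, signature: bytes, data: bytes) -> bool:
    """Return True if the signature on data checks out under pub."""
    try:
        pub.verify(signature, data)
        return True
    except InvalidSignature:
        return False

# Two parties: Alice and Bob meet in person and exchange signing keys.
alice_key = Ed25519PrivateKey.generate()
bobs_known_keys = {"alice": alice_key.public_key()}   # Bob's own, local choice

msg = b"hello from alice"
assert verified(bobs_known_keys["alice"], alice_key.sign(msg), msg)

# At scale: a third party certifies Carol's key, and Bob decides locally
# whether he trusts that certifier, so trust remains a local choice.
certifier_key = Ed25519PrivateKey.generate()          # e.g., an identity service
carol_key = Ed25519PrivateKey.generate()
carol_pub = carol_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
certificate = certifier_key.sign(b"carol:" + carol_pub)
bobs_trusted_certifiers = [certifier_key.public_key()]

def accountable(name: bytes, pub: bytes, cert: bytes) -> bool:
    """Accept a sender whose key is vouched for by a certifier Bob trusts."""
    return any(verified(c, cert, name + b":" + pub) for c in bobs_trusted_certifiers)

assert accountable(b"carol", carol_pub, certificate)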

This scheme is a form of access control: you accept input from me only if I am accountable. There is a big practical difference, though, because


