Bug Bounties and Whistle-Blowers
The bug bounty—paying for information about systems problems—stands in marked contrast to the more common practice of discouraging or dissuading whistle-blowers (defined in this context as those who launch an attack without malicious intent), especially whistle-blowers from outside the organization that would be responsible for fixing those problems. Yet the putative intent of the whistle-blower and the bug bounty hunter is the same: to bring information about system vulnerabilities to the attention of responsible management. (This presumes that the whistle-blower's actions have not resulted in the public release of an attack's actual methodology or of other information that would allow someone with genuine malicious intent to launch such an attack.) Whether prosecution or reward is the appropriate response to such an individual has long been debated in the information technology community.
Consider, for example, the story of Robert Morris, Jr., the creator of the first Internet worm in 1988. Morris released a self-replicating, self-propagating program—a worm—onto the Internet. The worm replicated itself much faster than Morris had expected, affecting computers at many sites, including universities, military installations, and medical research facilities. He was subsequently convicted of violating Section 2(d) of the Computer Fraud and Abuse Act of 1986, 18 U.S.C. §1030(a)(5)(A) (1988), which punishes anyone who intentionally accesses without authorization a category of computers known as "[f]ederal interest computers" and damages or prevents authorized use of information in such computers, causing a loss of $1,000 or more. At the time, however, a number of commentators argued for leniency in Morris's sentencing on the grounds that he had not anticipated the results of his experiment, and further that his actions had brought an important vulnerability into wide public view and had thus provided a valuable public service. It is not known if these arguments swayed the sentenc-
Another as-yet untried mechanism for sharing information is based on derivative contracts, by which an underwriter issues a pair of contracts: Contract A pays its owner $100 if on a specific date there exists a certain well-specified vulnerability X for a certain system. The complementary Contract B pays $100 if on that date X does not exist. These contracts are then sold on the open market. The issuer of these contracts breaks even, by assumption: because exactly one of the two contracts will pay out, a pair sold together for $100 leaves the issuer with no net exposure. If the system in question is regarded as highly secure by market participants, then the trading price of Contract A will drop—it is unlikely that X will be found to exist on that date, and so only speculators betting against the odds will buy Contract A (and will likely lose their [small] investment). By contrast, the trading price of Contract B will remain near $100, so investors playing the odds will profit only minimally, but with high probability. The trading prices of Contracts A and B thus reflect