Looking to the Future
WHY HAS LITTLE ACTION OCCURRED?
The Committee on Improving Cybersecurity Research in the United States believes that the cybersecurity threat is real, imminent, and growing in severity. Moreover, as one of the most technologically advanced nations in the world, the United States has much to lose from the materialization of this threat. But this committee is not the first committee—and this report is not the first report—to make this claim.
As early as 1973, the Electronic Systems Division of the U.S. Air Force noted the ease with which then-contemporary systems (such as OS/360 and GCOS) had been penetrated and argued that fundamental design flaws were responsible for allowing these penetrations.1 In 1974, Fortune published an article for the general public presenting a general overview of the vulnerability of multiaccess computer systems to unauthorized tampering, the reliability of access controls, and ways in which systems have been exploited.2
In 1991, the National Research Council weighed in. Computers at Risk stated:3

We are at risk. Increasingly, America depends on computers. They control power delivery, communications, aviation, and financial services. They are used to store vital information, from medical records to business plans to criminal records. Although we trust them, they are vulnerable to the effects of poor design and insufficient quality control, to accident, and perhaps most alarmingly, to deliberate attack. The modern thief can steal more with a computer than with a gun. Tomorrow’s terrorist may be able to do more damage with a keyboard than with a bomb.

R.R. Schell, P.J. Downey, and G.J. Popek, “Preliminary Notes on the Design of Secure Military Computer Systems,” January 1973, HQ Electronic Systems Division, Hanscom Air Force Base; available at http://csrc.nist.gov/publications/history/sche73.pdf.

T. Alexander, “Waiting for the Great Computer Rip-Off,” Fortune, 90(1): 142-150, July 1974.

National Research Council, Computers at Risk: Safe Computing in the Information Age, National Academy Press, Washington, D.C., 1991.
Computers at Risk was also one of the first reports to suggest that networking between computers would dramatically worsen the cybersecurity situation by enabling problems to propagate electronically and by enlarging the set of potential attackers—and indeed this is exactly what has taken place.
In 1997, the President’s Commission on Critical Infrastructure Protection noted:4
[T]he right command sent over a network to a power generating station’s control computer could be just as devastating as a backpack full of explosives, and the perpetrator would be more difficult to identify and apprehend….
[Furthermore,] the rapid growth of a computer-literate population ensures that increasing millions of people around the world possess the skills necessary to conduct such an attack. The wide adoption of common protocols for system interconnection and the availability of “hacker tool” libraries make their task easier.
While the possibility of chemical, biological, and even nuclear weapons falling into the hands of terrorists adds a new and frightening dimension to physical attacks, such weapons are difficult to acquire. In contrast, the resources necessary to conduct a cyber attack have shifted in the past few years from the arcane to the commonplace. A personal computer and a telephone connection to an Internet Service Provider anywhere in the world are enough to cause harm….
The Commission has not discovered an immediate threat sufficient to warrant a fear of imminent national crisis. However, we are convinced that our vulnerabilities are increasing steadily, that the means to exploit those weaknesses are readily available and that the costs associated with an effective attack continue to drop. What is more, the investments required to improve the situation—now still relatively modest—will rise if we procrastinate.
President’s Commission on Critical Infrastructure Protection, Critical Foundations: Protecting America’s Infrastructures, October 1997; available at www.fas.org/sgp/library/pccip.pdf.
Two years later, the National Research Council released another report, Trust in Cyberspace,5 which argued that it was necessary to
move the focus of the [cybersecurity] discussion forward from matters of policy and procedure and from vulnerabilities and their consequences toward questions about the richer set of options that only new science and technology can provide.
Trust in Cyberspace reiterated the emphasis on the security challenges posed by interconnected information technologies and networked information systems. It suggested that the research agenda would be driven in large part by the (then) newly found appreciation of the vulnerability of the nation’s critical infrastructure to new forms of attack.
In 2003, the Bush administration released The National Strategy to Secure Cyberspace.6 This report called attention to a threat of “organized cyber attacks capable of causing debilitating disruption to our Nation’s critical infrastructures, economy, or national security.” It further pointed out that “the attack tools and methodologies are becoming widely available, and the technical capability and sophistication of users bent on causing havoc or disruption is improving.” As for the consequences of cyber vulnerabilities, it noted:
In peacetime America’s enemies may conduct espionage on our Government, university research centers, and private companies. They may also seek to prepare for cyber strikes during a confrontation by mapping U.S. information systems, identifying key targets, and lacing our infrastructure with back doors and other means of access. In wartime or crisis, adversaries may seek to intimidate the Nation’s political leaders by attacking critical infrastructures and key economic functions or eroding public confidence in information systems….
Cyber attacks on United States information networks can have serious consequences such as disrupting critical operations, causing loss of revenue and intellectual property, or loss of life. Countering such attacks requires the development of robust capabilities where they do not exist today if we are to reduce vulnerabilities and deter those with the capabilities and intent to harm our critical infrastructures.
In 2005, the President’s Information Technology Advisory Committee (PITAC) released Cyber Security: A Crisis of Prioritization.7 This report noted:
The Nation’s information technology (IT) infrastructure, still evolving from U.S. technological innovations such as the personal computer and the Internet, today is a vast fabric of computers—from supercomputers to handheld devices—and interconnected networks enabling high-speed communications, information access, advanced computation, transactions, and automated processes relied upon in every sector of society. Because much of this infrastructure connects one way or another to the Internet, it embodies the Internet’s original structural attributes of openness, inventiveness, and the assumption of good will….
These signature attributes have made the U.S. IT infrastructure an irresistible target for vandals and criminals worldwide. The PITAC believes that terrorists will inevitably follow suit, taking advantage of vulnerabilities including some that the Nation has not yet clearly recognized or addressed. The computers that manage critical U.S. facilities, infrastructures, and essential services can be targeted to set off system-wide failures, and these computers frequently are accessible from virtually anywhere in the world via the Internet.
The reports mentioned above are only some of those issued in the past 15 years regarding the nation’s cybersecurity posture. Taken as a whole and as described in Appendix B, these reports point to an imminent and growing cybersecurity threat. Why then is there not a national sense of urgency about cybersecurity? Why has action not been taken to close the gap between our cybersecurity posture and the cyberthreat?
The notion that no action to promote cybersecurity has been taken in the past 15 years is somewhat unfair. In recent years, most major information technology (IT) vendors have undertaken significant efforts to improve the security of their products in response to end-user concerns over security. Many of today’s products are by many measures more secure than those that preceded these efforts. In addition, the sentinel events of September 11, 2001, spurred public concerns about security, and some of that concern has spilled over into the cybersecurity domain.
Nevertheless, these changes in the environment, important though they are, do not change the fact that the action taken in the last 15 years is nowhere near what is necessary to achieve a robust cybersecurity posture. Consider then the consequences of inadequate action, and imagine that sometime in the future the nation experiences what some have called a “digital Pearl Harbor.” In the subsequent investigative frenzy, the nation asks, “How could this have happened?”
A digital Pearl Harbor would—by definition—be a surprise. But it would also be a surprise that could have been anticipated.

President’s Information Technology Advisory Committee, Cyber Security: A Crisis of Prioritization, National Coordination Office for Information Technology Research and Development, Washington, D.C., February 2005; available at www.nitrd.gov/pitac/reports/20050301_cybersecurity/cybersecurity.pdf. Hereafter, “the PITAC report.”

In 2004, Bazerman and Watkins described a predictable surprise as an event that takes an individual or a group by surprise, despite prior awareness of all of the information necessary to anticipate the event and its consequences.8 In particular, they identify several characteristics of predictable surprises:
Leaders know that a problem exists and that the problem will not solve itself.
The problem worsens over time.
Solutions for the problem incur significant costs in the present, while the benefits of taking action—although likely larger than the solution costs—are both uncertain and realized in the future.
Some parties whose efforts are needed to help solve the problem benefit from inaction.
To explain inaction, Bazerman and Watkins posit causes at the individual, organizational, and political levels. Individual causes of inaction are rooted in cognitive biases that lead individuals to discount the future more heavily than is appropriate and thus to undervalue risks. They also prefer to run the risk of low-probability high-consequence events in the future rather than to incur certain but smaller losses in the present. Finally, they find it difficult to take action when they have not personally experienced a problem and cannot imagine what it would mean in practical terms.
Organizations fail to act because they do not have processes in place to scan the environment for all sources of threat, to integrate those sources of information, to respond in a timely manner, or to incorporate lessons learned from those responses into their institutional memory. They also have structural issues that inhibit a coordinated response to the problem and/or have incentives in place that encourage people to behave in a way that damages the ability to achieve organizational goals.
Politically, leaders are reluctant to make decisions that impose certain costs now for benefits that will almost certainly not be realized within their terms of office.
Most of these conditions can be seen in examining the current environment for cybersecurity. Policy makers have been warned repeatedly that there is a cybersecurity problem and that without action the problem will not solve itself. All signs point to a worsening of the cybersecurity problem, and the only argument today is how fast it is getting worse. It is
simply not credible to assert that the problem is getting better. Putting into place adequate cybersecurity measures, both technical and procedural, will cost in terms of reduced productivity, increased expense, and greater inconvenience, although the costs of such measures are dwarfed by the potential future benefits of avoiding certain kinds of cyber-disasters. And, both vendors and users of information technology benefit from inaction, because they can avoid the costs of changing existing practices.
From the committee’s perspective, the lack of adequate action in the cybersecurity space can be largely explained by three complementary reasons:
The various cybersecurity reports issued to date have not provided sufficiently compelling information to make the case for dramatic and urgent action. In this view, a sufficiently ominous picture of the threat would inspire decision makers to act. But it is well known that detailed and specific information is usually more convincing than information couched in very general terms—unfortunately, detailed and specific information in the open literature about the scope and nature of the cyberthreat is lacking.
Even with the relevant information in hand, decision makers discount future possibilities so much that they do not see the need for present-day action. In this view, nothing short of a highly visible and perhaps ongoing cyber-disaster will motivate actions. Decision makers weigh the immediate costs of putting into place adequate cybersecurity measures, both technical and procedural, against the potential future benefits (actually, avoided costs) of preventing cyber-disaster in the future—and systematically discount the latter as uncertain and vague.
The costs of inaction are not borne by the relevant decision makers. The bulk of the nation’s critical infrastructure is owned and operated by private-sector companies. To the extent that these companies respond to security issues, they generally do so as one of the risks of doing business. But they do much less to respond to the threat of low-probability, high-impact (i.e., catastrophic) threats, even though all of society at large has a significant stake in their actions.9
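The discounting dynamic in the second point above can be illustrated with a small back-of-the-envelope calculation. All of the numbers below are hypothetical, chosen only to show the shape of the bias, and are not drawn from this report:

```python
# Hypothetical illustration of why heavy discounting of an uncertain future
# loss can make certain, present-day security spending look unattractive.
security_cost_now = 10e6      # certain, immediate cost of security measures ($)
potential_loss = 500e6        # loss if a cyber-disaster occurs ($)
annual_probability = 0.02     # assumed chance of disaster in a given year
years_ahead = 10              # how far in the future the loss is imagined
discount_rate = 0.25          # steep discounting typical of the biases described

# Expected annual loss, then its present value under heavy discounting.
expected_loss = potential_loss * annual_probability
discounted = expected_loss / (1 + discount_rate) ** years_ahead

print(f"expected annual loss: ${expected_loss/1e6:.1f}M; "
      f"present value after {years_ahead} years at {discount_rate:.0%}: "
      f"${discounted/1e6:.2f}M")
# The $10M expected loss shrinks to roughly $1M in present-value terms,
# so it appears smaller than the $10M up-front cost of prevention.
```

With these assumed figures, the avoided loss looks an order of magnitude smaller than the up-front cost, even though over a decade the undiscounted expected losses far exceed it.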
As for the impact of research on the nation’s cybersecurity posture, it is not reasonable to expect that research alone will make any substantial difference. Indeed, there is a very large gap between a successful “in principle” result or demonstration and its widespread deployment and use. Closing this gap is the focus of Category 3 research, described in Chapter 6. But, as this report argues, many other factors must also be aligned if research is to have a significant impact. Specifically, IT vendors must be willing to regard security as a product attribute that is coequal with performance and cost, IT researchers must be willing to value cybersecurity research as much as they value research into high-performance or cost-effective computing, and IT purchasers must be willing to incur present-day costs in order to obtain future benefits.
PRIORITIES FOR ACTION
Despite the analysis of Section 10.1, the committee believes that meaningful action is possible to improve the cybersecurity posture of the nation. In certain contexts, the security risks inherent in using IT may outweigh the benefits of doing so, even after everything possible has been done to improve security in those contexts. (It is, of course, a topic worthy of research in itself to develop a decision-making framework that would help to identify such contexts.)
Nevertheless, for the majority of contexts in which IT is today or will in the future be a necessary enabler, a set of circumstances does give the committee hope that progress is indeed possible. Especially outside the intelligence community, it is increasingly common to find security practitioners and researchers who realize that risk management, rather than risk avoidance, is the name of the game. This realization makes it possible for managers to take pragmatic steps forward rather than waiting for the silver bullet to be found. A more powerful technological base that can support approaches and techniques previously deemed infeasible for technological reasons is now also available. Most importantly, there is a growing awareness among end users that cybersecurity should be a more serious consideration in their acquisition decisions than it was in the past. This is likely to increase the demand for greater cybersecurity functionality.
The committee has identified the five action items below as warranting the highest priority. Policy makers should carry out the following actions:
Create a sense of urgency about the cybersecurity problem commensurate with the risks.
Commensurate with a rapidly growing cybersecurity threat, support a robust and sustained research agenda at levels which ensure that a large fraction of good ideas for cybersecurity research can be explored.
Establish a mechanism for continuing follow-up on a research agenda.
Support infrastructure for cybersecurity research.
Sustain and grow the human resource base.
Item 1: Create a sense of urgency about the cybersecurity problem commensurate with the risks.
Some lessons can be learned from the nation’s response to the Y2K (year 2000) problem. In the early years of information technology, a programming practice arose of recording dates in a six-digit format (mm/dd/yy). If programs embedding this practice were operative at the turn of the century, the result could have been that the year “2000” (recorded as “00”) would be interpreted as the year “1900,” thus causing many date comparisons to be made incorrectly. Since this programming practice was widespread, and in particular was likely used in many critical systems, concerns arose that many of these critical systems would fail if this problem was not fixed.
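To make the failure mode concrete, the following sketch (illustrative only; the comparison function and the “windowing” repair shown are not taken from this report) shows how a two-digit-year date comparison goes wrong at the century boundary, along with the kind of fix many remediation efforts applied:

```python
# The classic Y2K two-digit-year bug: dates stored as "mm/dd/yy" strings
# lose the century, so comparisons fail once the year rolls past 1999.

def later_than(date_a: str, date_b: str) -> bool:
    """Compare two mm/dd/yy dates by (year, month, day), as legacy code did."""
    def key(d: str):
        mm, dd, yy = d.split("/")
        # Two-digit year: "00" (meaning 2000) sorts before "99" (meaning 1999).
        return (int(yy), int(mm), int(dd))
    return key(date_a) > key(date_b)

# January 2000 incorrectly compares as EARLIER than December 1999,
# because "00" is treated as if it were 1900.
assert later_than("01/15/00", "12/31/99") is False  # wrong result
assert later_than("12/31/99", "01/15/00") is True   # also wrong

# A "windowing" repair, one common remediation: interpret two-digit years
# below a pivot as 20yy and the rest as 19yy.
def full_year(yy: int) -> int:
    return 2000 + yy if yy < 50 else 1900 + yy

assert full_year(0) == 2000
assert full_year(99) == 1999
```

Windowing only postponed the ambiguity rather than eliminating it, which is why many remediation efforts instead migrated systems to four-digit years.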
Both the extent and the severity of the problem were largely unknown, but the timing of the problem was absolutely clear and unambiguous. In real-time date-dependent systems that used two-digit years, the problem would manifest itself on January 1, 2000, at midnight. In other systems, the problem would manifest itself upon first system startup after January 1, 2000. Consequently, many efforts were made to focus attention on the issue and to effect repairs. These efforts included legislation, public education and awareness, the replacement of old information technology, the development of backup and contingency plans, and insurance policies covering problems resulting from the Y2K problem.
In the late 1990s, the Y2K problem was seen as an urgent one. Moreover, in many ways, the Y2K problem can be regarded as a kind of cybersecurity problem. Plausible arguments existed suggesting that Y2K problems were potentially widespread and serious. Limited testing demonstrated, in a number of systems, the actual existence of Y2K problems. Nevertheless, the actual nature and scope of problems caused by two-digit years were unknown. Y2K problems in one system often had ramifications for the proper operation of other systems to which they were connected. Business considerations, including continuity of operations, insurance, and liability, played important roles in motivating corrective actions.
The national response to the Y2K problem demonstrates that it is possible to take action on a large scale in response to an impending
emergency. However, in one very fundamental aspect, the Y2K problem and today’s cybersecurity problem are different. The Y2K problem was certain to arrive on a specific date known to everyone (and the nature of the problem was well understood), whereas the arrival date and specific nature of a “digital Pearl Harbor” are highly uncertain. How, then, can a sense of urgency be created in the absence of a natural forcing deadline?
From the committee’s perspective, two actions are necessary, both motivated by the discussion of Section 10.1. The first action relates to making more information available. Because it is possible, though in the committee’s view unlikely, that the information available to decision makers is inadequate, the compilation of a truly authoritative threat assessment could have salutary benefits. But to be truly authoritative, this assessment would have to draw on the best industry and intelligence data available. Indeed, some of the necessary information is not available today in any meaningful sense, since many victims of cybersecurity incidents are reluctant to discuss these incidents for public attribution, and other data are classified.
Arrangements must thus be made to incentivize these parties to release the information, as discussed elsewhere in this report. At the same time, actions must be taken to relieve the concerns of victimized parties about the harm that might result from the release of such information.
The notion of developing measures to increase transparency and provide relevant information so that consumers can make informed decisions is not new, and some steps in this direction have been taken. For example, within the National Security Telecommunications Advisory Committee (NSTAC) context, incident information (e.g., outages, causes) is shared in the relevant community subject to a confidentiality requirement. The InfraGard program is a Federal Bureau of Investigation (FBI)-sponsored effort that brings together businesses, academic institutions, state and local law enforcement agencies, and other participants to share information and intelligence preventing hostile acts in cyberspace. The Department of Homeland Security (DHS) has established “Procedures for Handling Protected Critical Infrastructure Information” that govern the receipt, validation, handling, storage, marking, and use of critical infrastructure information voluntarily submitted to the DHS.10 Nevertheless, such sharing proceeds somewhat tentatively. Firms have an incentive to free-ride on the information security expenditures of the other members of sharing organizations (“the tragedy of the commons”), and additional incentives need to be developed for firms to fully and truthfully reveal security information so that the social welfare benefits of sharing can be accrued.11

Federal Register, 71(170), September 1, 2006. See http://edocket.access.gpo.gov/2006/06-7378.htm.

A second reason for a reluctance to share information is that, for a given incident, a fix for the problems that caused it may not be immediately available. Sometimes, even the mere statement that there is a vulnerability in a particular system is enough to prompt special attention to that system from would-be attackers—attention that might result in the discovery of that vulnerability.
A first step toward an authoritative threat assessment could have been the National Computer Security Survey sponsored by the Bureau of Justice Statistics at the Department of Justice (DOJ) and the National Cyber Security Division (NCSD) at the DHS. Conducted by the RAND Corporation, this study was scheduled to be published in 2007 and would have had the advantage of being able to provide legal protection for the information provided by survey respondents. Statutory provisions protect the confidentiality of the information provided, prohibit the sharing of data with other agencies, provide exemptions from the Freedom of Information Act (FOIA), and ensure immunity from legal processes.12 However, to be truly valuable for understanding the evolving threat and trends, the survey would have to be conducted on a regular and ongoing basis. Unfortunately, the DOJ and the NCSD terminated this task before its completion.
Section 10.1 also indicated the possibility—indeed, in the committee’s view, the great likelihood—that adequate information on the cybersecurity threat is available today. Thus, the second action calls for changing the decision-making calculus that excessively focuses vendor and end-user attention on the short-term costs of improving their cybersecurity postures.
Calls to change the decision-making calculus are often regarded suspiciously by those who would be affected by such changes—not surprisingly, since their bases for business planning would, by definition, be changed. As noted elsewhere in this report, there is enormous political resistance to notions of change that entail direct regulation or liability, resistance that in some cases is well grounded in uncertainty about ultimate effects. This is not to say that it is impossible to take meaningful policy action—only that such action may have to be more indirect and less obvious than some might prefer. Such policy actions might include, for example, encouraging accounting firms and insurance firms to take into
See Lawrence A. Gordon, Martin P. Loeb, and William Lucyshyn, “Sharing Information on Computer Systems Security: An Economic Analysis,” Journal of Accounting and Public Policy, 22(6): 461-485, 2003.
Department of Justice, Bureau of Justice Statistics, National Computer Security Survey Web page, http://www.ojp.usdoj.gov/bjs/survey/ncss/ncss.htm. The law, noted on this Web page, is P.L. 107-347, Title V, and 44 U.S.C. § 3501.
account the cybersecurity postures of their customers when providing audits or setting insurance rates.
The committee recognizes that policy actions are, almost by definition, less compelling for focusing attention and stimulating action than are deadlines imposed by nature. But in the committee’s view, even weaker policy actions can stimulate some action, and every little bit helps.
Finally, although the committee did not take a position regarding the desirability of regulation or liability as a way to improve cybersecurity, it did agree that regulation and liability are tools of last resort to promote this end. In other words, the nation should not turn to regulation or liability as an approach to improving cybersecurity until decision makers conclude that other approaches have proven insufficiently effective. In the meantime, while awaiting that judgment, it behooves the research community to consider how the tools of regulation and liability might sensibly be applied should those tools of last resort ultimately prove necessary. The alternative to such interim research is an ill-considered and unresearched regime of liability and regulation that might well be imposed hastily in the wake of a crisis, to the detriment of all.
Item 2: Commensurate with a rapidly growing cybersecurity threat, support a robust and sustained research agenda at levels which ensure that a large fraction of good ideas for cybersecurity research can be explored.
Given the need for breadth and diversity in the research portfolio within the areas of focus described in Part II of this report, the committee believes that the nation is ill served by a funding model that seeks to channel resources to a small number of specific research topics. Instead, it makes more sense to conceptualize the overall research portfolio as one that focuses resources on sustaining the intellectually broad and diverse community capable of (1) generating ideas across a wide waterfront (as one might expect would be needed for a diverse threat) and (2) producing the cybersecurity expertise needed across all points in the IT life cycle, including design, development, implementation, testing, operations, maintenance, upgrading, and retirement. Note further that breadth in the research agenda does not mean that every topic should be funded equally. Rather, it is the merits and rationales of individual proposals,
combined with a cognizance of the threat environment and advances in technology, that should determine funding allocations.
With this model, the scale of the necessary funding is set by the amounts needed to sustain this community at appropriate levels and to ensure that a large fraction of good ideas for cybersecurity research can be explored. In this context, a good idea is one that is determined to be good through some kind of evaluative process. In peer-reviewed communities, peer review determines if an idea is “good.” In agencies such as the Defense Advanced Research Projects Agency (DARPA), program managers exert much influence in deciding if an idea is good.
Several federal agencies have an important role to play in the cybersecurity research agenda. For two reasons, the committee does not make specific recommendations for which agencies should pursue which specific research topics. First, many of the topics described might well fit into the agendas of multiple agencies. Second and at the same time, the different agencies have different needs—especially mission-oriented agencies. However, the committee does urge that federal decision makers take into account historical strengths and missions of the various departments.
For example, the Department of Energy (DOE) is a logical place to support cybersecurity research efforts that relate to Supervisory Control and Data Acquisition (SCADA) systems, as such systems are an essential element of the electric grid, for which the DOE has much oversight responsibility. The National Institute of Standards and Technology (NIST) and National Security Agency (NSA) have historically undertaken substantial research efforts in cryptography and other security technologies and have developed strengths that should be leveraged in future research to the extent that it can be done on an unclassified basis. With historical efforts in metrology, NIST is also a natural place to focus research on cybersecurity metrics. DARPA has historically conducted substantial research on system-building, and all of the Department of Defense (DOD)—as well as much of the nondefense government portfolio and civilian work—would benefit substantially from advances in secure system building, as discussed in Appendix B (Section B.6.4.2). And, given its investigator-driven focus, the National Science Foundation (NSF) is the obvious agency to develop and sustain a broad national research portfolio.
Different agencies also support different kinds of research communities. For example, NSF tends toward smaller grants for individuals or small teams, with fewer and less specific deliverables. Historically, DARPA has built communities and encouraged large grants to address very hard problems, although recent management changes and policies have begun to change such practices. Diversity in the character of research communities is also to be encouraged, because it is hard to predict what styles of research will result in progress.
As for the magnitude of the budget needed to sustain the committee’s principle, the committee notes that for the foreseeable future the cybersecurity threat will only grow. First, the threat is likely to grow at a rate faster than the present federal cybersecurity research program will enable us to respond, and the consequences of failing to provide an adequate response could be quite damaging to the nation.
Second, the PITAC report implicitly enunciated a principle for funding cybersecurity research that the committee finds eminently sensible: most good research ideas should be supported and that proposals based on such ideas should be supported at or near the levels requested.13
For these reasons, the committee concludes in general terms that both the scope and scale of federally funded cybersecurity research are seriously inadequate. To execute fully the broad strategy articulated in this report, a substantial increase in federal budgetary resources devoted to cybersecurity research will be needed.
To provide some characteristic orders of magnitude for this discussion, the committee notes that the scale of today’s cybersecurity research budgets is probably somewhat larger than $160 million annually. This estimate is based on the PITAC estimate for federally supported cybersecurity research in fiscal year (FY) 2004, both classified and unclassified, of about $160 million. Although the committee was unable to find data to support a similar estimate for FY 2005 or FY 2006, it also knows of no significant change in the budget, a point suggesting that “a little more than the FY 2004 level” is not an unreasonable guess. (The breakdown of the total $160 million between classified and unclassified research is unknown, although it is obvious that amounts supporting classified research are not accessible to the broad cybersecurity research community at large.)
As a point of comparison, the committee notes a Gartner Group estimate that financial losses stemming from phishing attacks alone exceeded
$2.8 billion in 2006.14 The reason that such losses are not more visible is that they are usually absorbed as a “tax” on purchases (that vendors pass along to customers), and they are distributed as small losses and productivity losses over the population. Thus, no one party suffers a huge loss (generally) that shows up in reports. But the overall expense is large.
Another point of comparison is the 2005 FBI Computer Crime Survey, which estimated the cost of “computer security incidents” in the 12-month period from mid-2004 to mid-2005 at $67.2 billion to U.S. organizations.15 (The raw data for this survey were provided by 2,066 organizations on a self-reported basis, and the $67.2 billion aggregate figure is extrapolated.) It is hard to know how seriously to take this specific figure, which amounts to 0.5 percent of the U.S. gross national product; although statistics on the amount lost to cybercrime are generally of dubious reliability, there is no doubt that aggregate losses are considerable.
The committee does not mean to imply that the dollars that could be saved through better cybersecurity should somehow subsidize a research effort. Yet it is not unreasonable to suggest that the magnitude of such losses should have some bearing on the efforts devoted to cybersecurity research.
Fiscal reality today dictates that discretionary budgets for the foreseeable future will be very tight, if not declining in absolute terms. In the current budget environment, is it “realistic” to recommend budget increases in a program or in a national portfolio?
It is a truism that growth in the budget of any given program comes from one of two sources—an explicit decision to support it with additional appropriations without a corresponding offset somewhere else in the budget, or an explicit decision to increase the program’s budget while at the same time decreasing the budget of one or more other programs. But it is also true that no matter how tight budgets are in any given year, some programs grow, others shrink, and still others start anew while others terminate. Thus, growth in existing programs or new program starts reflect political will and a judgment regarding the benefits of such programs relative to other programs.
The committee also makes three caveats about additional funding. First, policy makers should regard cybersecurity research as a continuing and ongoing need that will extend for the foreseeable future. As long as information technology continues to enable economic innovation and to be a pillar of prosperity, cybersecurity cannot be seen as a discrete problem to be solved once and for all, but rather as a class of problems that will continuously evolve as new technology and new threats continue to present new issues. As a result, a funding model calling for a one-time increase in cybersecurity research, even a substantial one over multiple fiscal years, is less relevant than one that continues to enable a large fraction of good ideas to be supported in the long term.
Gartner Press Release, “Gartner Says Number of Phishing E-Mails Sent to U.S. Adults Nearly Doubles in Just Two Years,” November 9, 2006; available at http://www.gartner.com/it/page.jsp?id=498245.
Second, additional funding should really be “new money” rather than “relabeled” money or money taken from other computer science research. In the words of the PITAC report, for instance:
[T]he increase in the NSF CISE budget for civilian cyber security fundamental research [should] not be funded at the expense of other parts of the CISE Directorate…. Significant shifts of funding within CISE towards cyber security would exacerbate the strain on these other programs without addressing the existing disparity between CISE and other directorates. Moreover, much work in “other” CISE areas is beneficial to cybersecurity and thus reductions in those other areas would be counterproductive. [For example,] theoretical computer science underpins much encryption research, both in identifying weaknesses and in advancing the state of the art. Algorithms research helps ensure that protocols designed for security can be efficiently implemented. Programming language research can help address security at a higher level of abstraction and can add functionalities such as security assurances to software. Software engineering can help eliminate software bugs that are often exploited as security holes. And new computer architectures might enforce protection faster and at finer granularity.
Nor should cybersecurity research remain in the computer science domain alone. Additional funding might well be used to support the pursuit of cybersecurity considerations in other closely related research endeavors, such as those related to creating high-assurance systems and the engineering of secure systems across entire system life cycles (see the discussion in Section 4.3).
Third, funding should be increased only at a rate consistent with the pace at which qualified researchers are trained or move into the field from other branches of computer science. “Boom-and-bust” cycles often do harm to a field, especially when they lead to unwise expenditures.
Item 3: Establish a mechanism for continuing follow-up on a research agenda.
Management of the complete cybersecurity research portfolio across the federal government requires that government decision makers have
a reasonably fine-grained understanding of the scope and nature of that portfolio. However, to the committee’s knowledge, a picture that is both adequately detailed and sufficiently comprehensive does not exist today. To take just one example, the President’s Information Technology Advisory Committee was able to determine the DARPA investment in cybersecurity research and development (R&D) for FY 2004 only within a factor of about four (that is, PITAC determined that figure to be between $40 million and $150 million).
The National Coordination Office (NCO) for Networking and Information Technology Research and Development (NITRD), which supports the planning, budget, and assessment activities of the federal government’s NITRD program, tracks the unclassified portion of the cybersecurity research and development portfolio. This portfolio, which accounts for about $175 million in the administration’s FY 2007 request, is focused on research and advanced development to prevent, resist, detect, respond to, and/or recover from actions that compromise or threaten to compromise the availability, integrity, or confidentiality of computer-based systems. The NCO supports the Interagency Working Group on Cyber Security and Information Assurance (CSIA IWG), which coordinates programs, budgets, and policy recommendations for CSIA R&D.16
The NITRD coordination process is an important first step toward creating the picture that is needed for adequate management of the federal cybersecurity research portfolio. Nevertheless, it could be strengthened in a number of important ways:
Distinguishing clearly between research and development. As presented, the NITRD figures aggregate research and development. Because development efforts are most often focused on short-term deliverables, aggregating research and development does not provide a clear indication of effort devoted to longer-term goals.
Including classified research and development in the big picture. The mere fact that research and development may be conducted under
classified auspices does not mean that such efforts produce no knowledge of value outside the military, diplomatic, and intelligence communities. It may mean, for example, that researchers and developers may have been asked to conduct their work in the context of specific problems whose details are classified. Thus, classified work is at least potentially relevant to the nation’s broad efforts to secure cyberspace. (Note that this notion does not suggest that the detailed spending figures for classified cybersecurity research should be made public or broadly available—but policy makers in both the executive and legislative branches [e.g., in the Office of Management and Budget and in the relevant congressional committees] should have access to the “big picture” of cybersecurity research.)
Disaggregating (and publishing) government-wide budget figures associated with different areas of focus. Individual agencies will often group the contracts and grants they support into broader categories (Box 10.1 presents an exemplary approach). But the major weakness in these agency efforts is that they are not comparable across agencies. That is, any relationship between the categories of one agency and another agency is due mostly to chance. Establishing some common categories (and providing multiple crosswalks among them) that would be relevant across agencies would provide a more informative picture.
Tracking budget figures from year to year. The picture of federal cybersecurity research efforts evolves over time. Thus, efforts must be made to provide comparable analyses from year to year if the time evolution is to be understood.
Note also that the comparability of budget figures in different categories across agencies depends largely on having a small number of analysts who are knowledgeable about the subject matter perform the mapping from individual awards to budget categories for all of the agencies involved. The small number is essential: if each agency instead tasks its own analyst with this work, each analyst will apply different criteria and judgments in the mapping. For similar reasons, it is important for the same analysts to do the categorizations from year to year, since doing so will enhance the year-to-year comparability of the resulting figures.
Greater transparency into federal support for cybersecurity research would enable decision makers at all levels of responsibility, and in particular the program managers with direct responsibility for the execution of programmatic responsibilities regarding research, to understand the big picture of federal activities in this area. One benefit is that program managers would be able to identify more easily excessive redundancy in research.17 A second benefit is that transparency would facilitate greater scrutiny of research projects by the cybersecurity community at large—scrutiny that might help to terminate projects that were clearly going down the wrong path.18
A Model Categorization for Understanding Budgets
The National Science Foundation (NSF) overview of the fiscal year 2004 awards for the Cyber Trust program and related awards included several substantive categorizations of the same awards.
The NSF provided multiple categorizations, noting on the Web site (see the source in this box) that “most research projects have several dimensions, such as the expected time to yield results, where the project lies on scales ranging from empirical to theoretical work, from foundational to applied, and across domains and disciplines of study. Any attempt to group projects into categories will consequently succeed better for some than for others.” Accordingly, NSF presents multiple categorizations that constitute a framework for relating projects to each other and that provide an overall picture of the program.
Item 4: Support infrastructure for cybersecurity research.
Making progress on any cybersecurity research agenda requires substantial attention to infrastructural issues. In this context, a cybersecurity research infrastructure refers to the collection of open testbeds, tools, data sets, and other things that enable research to progress and allow research results to be implemented in actual IT products and services. Without an adequate infrastructure, there is little hope for realizing the full potential of any research agenda.
The reason is that cybersecurity is a systems and an operational issue. For example, realistic testbeds are needed for demonstrating or validating the operational utility of new cybersecurity technologies. Realistic data sets of sufficient size, realism, and currency are similarly needed for security analysts to understand and characterize the various attacks against which they are defending (while keeping in mind that future attacks may not resemble past attacks).
An infrastructure for cybersecurity research provides invaluable assistance in trying out new ideas at a reasonable scale, in the wild, with real users; insight into appropriate paths to the “tipping point” (the point after which the community at large no longer sees any reason to resist adopting an innovation); and ways of exploring the achievement of fundamental change through incremental strategies that do not require all Internet users and all their vendors to change before benefit is realized.
Consider, for example, the need for cybersecurity testbeds. Because a large part of the cybersecurity problem involves the rapid propagation of viruses and worms throughout the Internet, a realistic testbed for testing defenses is necessary. In this context, “realistic” means one of sufficient size and appropriate configuration to be in some sense representative of the Internet as a whole. A testbed enables defenses against viruses and
worms to be tested under relatively controlled conditions. Propagation speed, destructiveness, and virulence of an attack can be evaluated in a safe environment (i.e., without consequences for the larger Internet). Most importantly, a testbed can be instrumented quite thoroughly so that the detailed mechanisms of an attack can be better understood. (An example of a cybersecurity testbed is the Cyber Defense Technology Experimental Research [DETER], a joint project of the University of California at Berkeley; the University of Southern California’s Information Sciences Institute [USC-ISI]; and McAfee Associates. The DETER network was launched in late 2003 under a 3-year grant from the NSF in cooperation with the DHS.)
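To illustrate the kind of measurement an instrumented testbed makes possible, the following minimal Python sketch simulates a randomly scanning worm on a small synthetic network and records the propagation curve step by step. All parameters (network size, probe rate, number of steps) are purely illustrative assumptions, not figures from any actual testbed experiment.

```python
# Illustrative sketch only: worm propagation measured on a simulated
# "testbed" network. Network size and probe rate are hypothetical.
import random

random.seed(7)                  # make the simulated run reproducible
N = 1000                        # number of hosts in the simulated network
PROBES_PER_STEP = 3             # addresses each infected host scans per tick

infected = {0}                  # patient zero
history = [len(infected)]       # instrumentation: infections over time

for step in range(20):
    newly = set()
    for host in infected:
        for _ in range(PROBES_PER_STEP):
            target = random.randrange(N)   # random scanning, as in many worms
            if target not in infected:
                newly.add(target)
    infected |= newly
    history.append(len(infected))

# The instrumented run yields a propagation curve, with no consequences
# for any real network.
assert history[0] == 1
assert all(a <= b for a, b in zip(history, history[1:]))  # monotone growth
```

A real testbed such as DETER operates on actual hosts and network hardware rather than a toy model, but the point is the same: propagation dynamics can be observed in detail, safely.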
Cybersecurity testbeds also include research platforms. A good example of a research platform serving as a testbed is Multics, which served as the focal point for the exploration and demonstration of new ideas over several generations of researchers.19
A cybersecurity research infrastructure also includes large-scale data sets that allow researchers to accurately represent certain kinds of attacks flowing across the Internet. In the absence of such large-scale data sets, which ought to be open to any legitimate cybersecurity researcher, the efficacy of a solution may be based on nonrepresentative situations or attacks. An example of an effort to make such data available to the cybersecurity research community is the DHS-sponsored Protected Repository for the Defense of Infrastructure against Cyber Threats (PREDICT) initiative. PREDICT provides cybersecurity developers and evaluators with high-quality, regularly updated, network operations data sources that provide timely and detailed insight into cyberattack phenomena occurring across the Internet, and in some cases will reveal the effects of these attacks on networks that are owned or managed by the data producers.
Item 5: Sustain and grow the human resource base.
Human capital is a particularly important concern for cybersecurity, since people are the originators of new ideas. Recommendation 2 of the PITAC report Cyber Security: A Crisis of Prioritization dealt directly with this point. That recommendation stated:
[T]he Federal government should intensify its efforts to promote recruitment and retention of cyber security researchers and students at research universities, with a goal of at least doubling the size of the civilian cyber security fundamental research community by the end of the decade. In particular, the Federal government should increase and stabilize the funding for fundamental research in civilian cyber security, and should support programs that enable researchers to move into cyber security research from other fields.
Multics (Multiplexed Information and Computing Service) was a mainframe time-sharing operating system begun in 1965 and used until 2000. More information on Multics can be found at http://www.multicians.org/.
The reasoning underlying this recommendation was, and remains, sound. Today, cybersecurity research is not a broad-based effort that engages a substantial fraction of the computer science research community. For example, only a small fraction of the nation’s graduating doctoral students in IT specialize in cybersecurity, only a few professors conduct research in cybersecurity, and only a few universities support research programs in these fields.
The committee aligns itself with the spirit of this recommendation, if not necessarily its specific scale. In times of crisis, calls for new technology usually invoke the memory of the Manhattan Project to build the atomic bomb. But the need to build human capital for the cybersecurity field suggests that it is not the Manhattan Project that provides the right metaphor, but rather the national response to Sputnik. The Manhattan Project resulted in the deployment of hardware—whereas a primary result of Sputnik was the National Defense Education Act, which focused attention on and generated substantially greater support for increasing science and mathematics education. Analogously, the committee believes that increasing human capital for cybersecurity ought to be an essential part of the national response to the cybersecurity problem.
Consider, then, two key dimensions of the human capital issue in cybersecurity research addressed in the following subsections.
Enlarging the Pool of Researchers
Universities are the primary source of human capital—and graduate study is essentially the only source for the researchers of the future. For a field in which new ideas are always needed (and in light of the increasing sophistication of cybersecurity threats), growing the supply of such researchers and exploiting the power of many minds at work are critical for success and essential if we are to have even a remote hope of staying ahead of the curve, or even keeping pace with it.
There are only two strategies for increasing the number of researchers—training new entrants to specialize in the field (that is, graduate students) and enticing already-established researchers in other fields to join the field. Either strategy depends on demonstrating to these prospective new researchers that—in addition to important and interesting intellectual problems—there is a future to working in the field, a point suggesting
the importance of research support for the field that is both adequate and stable. Regarding adequacy—increasing the number of researchers in a field necessarily entails increased support for that field, and no amount of prioritization within a fixed budget will result in significant growth in that number. Regarding stability—stable or growing levels of funding act as a signal to potential graduate students about the importance of the field and, by implication, the potential for professional advancement.
Avoiding negative signals to prospective researchers is also important. For example, given the uncertainties of research, funding models for individual research contracts or grants that demand short-term deliverables and that include go/no-go decisions reduce the number of qualified individuals who regard that research field as being worth a career commitment. They also bias the conduct and scope of the research effort. Research that cannot be published or otherwise disseminated is also an inhibitor, given that the potential for recognition by one’s peers—whatever the form—is a powerful motivator for many researchers and indeed a career enhancer for those in academia.
Yet another issue is that of making the broadest possible use of available talent. One aspect of such talent is graduate student labor, upon which much of university research is based. Graduate students, who work under the supervision of faculty members, are nevertheless expected to make original contributions to knowledge in their specialties. When the federal government places restrictions on the research work that foreign graduate students can perform, it reduces the pool of talent available to further the research agenda—and given that foreign graduate students constitute a significant fraction of the graduate student population, it diminishes the talent pool significantly.
A second aspect of the talent issue is that of the participation of females and non-Asian minorities in advanced IT education. Apart from issues of simple equity, enhancing diversity in intellectual backgrounds and personal histories of the cybersecurity research workforce is likely to expand the range of approaches proposed and taken to address unsolved problems, an outcome that may well lead to more rapid progress. Moreover, anecdotal evidence from some cybersecurity researchers suggests that a higher percentage of these underrepresented students are involved in cybersecurity research than in other subspecialties within computer science.
Enhancing Cybersecurity Knowledge and Awareness in the Future IT Workforce
A number of government efforts to promote the education of security specialists focus on teaching specialists about current technologies, organizational management, and best practices with current products and
services. Such efforts are useful, but they do not speak to the development of a cadre of computer scientists, engineers, and IT leaders who will focus on how to make the next generation of products and services more secure.
Today, designers and developers of IT products and services are often not schooled in what it means to design and develop with cybersecurity in mind. Software engineering has not traditionally been conceptualized or practiced with an assumption that there was an active adversary. But now designers and developers must approach their tasks under the assumption that every line of code may someday be attacked. The use of threat-based design and development is a shift in the development of IT products. Education must be seriously revamped if this shift is to take place on a large scale.
Put differently, in the long run, security will require the integration of a cybersecurity perspective in virtually every IT course, with the goal of promoting a security culture throughout the masses of systems designers, developers, and systems administrators and not just in cybersecurity researchers. That is, every software and hardware course of study should integrate the research results from the study of security requirements, architectures, and tools with an eye toward training future IT workers—not just future security experts, but also every IT practitioner, researcher, educator, systems administrator, computer designer, and programmer.
Consider what such revamping of mind-set might mean in the IT life cycle.
Whereas the old mind-set in hardware and software design focused on performance and functionality, respectively, the new mind-set should also focus equally on security and attack resilience. As an example, current software engineering education stresses some form of object reuse, generalization of interfaces, and modularization, but it does not address the security implications of such features. The various parts of a program that reuse an object may have different security expectations, generalized interfaces may expose too much “attack surface,” and modularization itself has the side effect of creating accessible interfaces.
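The security cost of a generalized interface can be made concrete with a minimal Python sketch. The two functions below expose the same data store; every name here is hypothetical and chosen only for illustration. The generalized interface is more reusable, but it hands a caller far more "attack surface" than the narrow, purpose-built one.

```python
# Hypothetical sketch: two interfaces to the same user store.
USERS = {1: "alice", 2: "bob"}      # stand-in for a real data store

def run_query(expr: str):
    """Generalized interface: evaluates an arbitrary expression against
    the store. Highly reusable, but a caller who controls `expr`
    effectively controls the whole process."""
    return eval(expr, {"USERS": USERS})   # deliberately unsafe, for contrast

def get_user(user_id: int) -> str:
    """Narrow interface: one operation, one validated parameter. The only
    thing a caller can do is look up a user by integer id."""
    if not isinstance(user_id, int):
        raise TypeError("user_id must be an int")
    return USERS[user_id]

# Both serve the legitimate use case equally well...
assert run_query("USERS[1]") == "alice"
assert get_user(1) == "alice"
# ...but only the generalized interface also accepts arbitrary code, e.g.:
#   run_query("__import__('os').system('...')")
```

The point is not that reuse or generality is bad, but that each generalization of an interface should be weighed against the extra surface it exposes.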
Whereas security was implemented as an afterthought in previous computer designs, it should be an integral part of the initial designs for future secure and attack-resilient computer architectures, and it should be integrated into every aspect of the hardware and software design life cycles and research agendas.
Whereas in the old mind-set, design principles help primarily to critique a system after the design has been completed, the new mind-set calls for clear examples of design that demonstrate how such principles can be incorporated into new designs.
Whereas the response to security breaches was reactive in the old mind-set (e.g., the “patch and pray” approach, with vendors supplying software patches after vulnerabilities are identified or their products are attacked), it should be proactive in the anticipation of new types of attacks in the new mind-set.
Whereas many security products implemented only perimeter security (e.g., firewalls) in the old mind-set, the new mind-set would emphasize pervasive fine-grained authorization. For example, a secure computer architecture would include security features in the processor architecture, the hardware platform architecture, the operating system kernel, and the networking protocols; each of these components would be designed and implemented with considerable thought given to security.
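The contrast between perimeter checks and pervasive authorization can be sketched in a few lines of Python. In this hypothetical example (the permission names and data layout are invented for illustration), each operation carries its own authorization check, so a caller who is already "inside the perimeter" is still denied operations it has no right to perform.

```python
# Hypothetical sketch of pervasive, fine-grained authorization: every
# operation checks its own permission rather than relying on a single
# perimeter check at login time.
from functools import wraps

class Forbidden(Exception):
    pass

def requires(permission):
    """Attach an authorization check directly to an operation."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if permission not in user.get("permissions", set()):
                raise Forbidden(f"{user['name']} lacks {permission!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("records:read")
def read_record(user, record_id):
    return f"record {record_id}"

@requires("records:delete")
def delete_record(user, record_id):
    return f"deleted {record_id}"

clerk = {"name": "clerk", "permissions": {"records:read"}}
assert read_record(clerk, 42) == "record 42"   # authorized operation
try:
    delete_record(clerk, 42)    # inside the perimeter, but still denied
except Forbidden:
    pass
```

A firewall-only design would have granted the clerk both operations once admitted; the fine-grained version limits the damage a compromised or careless insider can do.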
Whereas the old mind-set dealt with fault tolerance, or resistance to physical aging, deterioration, and transient faults, the new mind-set must also deal with very intelligent (human) attackers and malicious programs (malware). For example, current software engineering education does not emphasize that inputs that affect program flow must always be checked for validity before they are passed on, even when the data are made available at internal interfaces to program components. Every operation must be considered from the standpoint of how it can be spoofed, tampered with, replaced, or locked up.
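Validation at an internal interface can be sketched in a few lines of Python; the function and field names below are hypothetical. The internal component re-checks the value it receives even though a "trusted" upstream caller may already have checked it, so a spoofed or tampered-with value cannot slip through.

```python
# Hypothetical sketch: an internal component validates its own input
# rather than trusting its caller's validation.
def parse_record_count(raw) -> int:
    """Accept only a decimal string within an explicit bound."""
    if not isinstance(raw, str) or not raw.isdigit():
        raise ValueError(f"invalid record count: {raw!r}")
    count = int(raw)
    if not (0 < count <= 10_000):      # enforce an explicit upper bound
        raise ValueError(f"record count out of range: {count}")
    return count

assert parse_record_count("250") == 250
# Malformed, adversarial, or out-of-range values are all rejected,
# regardless of which internal caller supplied them:
for bad in ["-1", "1e9", "250; DROP TABLE users", "999999999"]:
    try:
        parse_record_count(bad)
        raise AssertionError(f"should have rejected {bad!r}")
    except ValueError:
        pass
```

The cost of the redundant check is trivial; the benefit is that a single missed check at the system boundary no longer compromises every component behind it.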
Whereas in the old mind-set, there was time to deal with a security breach, the new mind-set must also consider malware such as future viruses and worms that can infect all computers on the Internet in a few seconds. Hence, responses at human operator timescales are woefully inadequate, and more autonomic responses should be researched and, if promising, deployed.
Whereas in the old mind-set, security was treated as mainly a software issue, the new mind-set should consider both hardware and software dimensions of a solution.
Whereas in the old mind-set, security experts operated in separate domains such as cryptography, network security, operating system security, and software vulnerabilities, the new mind-set should emphasize the integration of these separate areas, cross-pollination of ideas, and working toward the best system solution, given security, performance, cost, and usability goals.
Whereas in the old mind-set, students are primarily indoctrinated in the importance of correct design and implementation, the new mind-set gives equal emphasis to notions of defensive design and implementation in which the expectation is that programs must deal with user mistakes and malicious adversaries.
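The difference between "correct" and defensive implementation can be illustrated with a hypothetical Python sketch. Both functions below decide the same access question and agree on every input the designer anticipated; the defensive version additionally fails closed on everything else.

```python
# Hypothetical sketch: a "merely correct" vs. a defensive
# implementation of the same access decision.
def can_edit_correct(role: str) -> bool:
    # Correct for the roles the designer had in mind...
    if role == "viewer":
        return False
    return True        # ...but any unanticipated role falls through to allow

def can_edit_defensive(role: str) -> bool:
    allowed = {"editor", "admin"}   # explicit allow list
    return role in allowed          # anything unexpected is denied

# Both behave identically on expected input:
assert can_edit_correct("editor") and can_edit_defensive("editor")
assert not can_edit_correct("viewer") and not can_edit_defensive("viewer")
# A malformed or adversarial role slips past only the first version:
assert can_edit_correct("x; rm -rf /") is True
assert can_edit_defensive("x; rm -rf /") is False
```

Enumerating what is allowed, rather than what is forbidden, is the essence of the defensive mind-set: user mistakes and adversarial inputs land in the deny branch by default.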
Whereas in the old mind-set, a system is considered secure until demonstrated otherwise by a practical attack, the new mind-set suggests that a system should be regarded as insecure until there is evidence that suggests its resistance to attack.
These comments are not intended to suggest that every designer and developer of IT products, services, and applications must become a security specialist as well. Many of today’s security specialists argue, with considerable force and persuasiveness, that security is hard, that only a few folks can get it right, and that if security has to be addressed over and over again in every application, the likely result will be myriad insecure applications. Other parts of this report have suggested that security functionality can be made easier to use (e.g., Section 6.1). But the argument for changing the security mind-set across all designers and developers is just that—to create a mind-set that appreciates and acknowledges the value of security and enables the designers and developers to engage in productive and meaningful interaction and dialogue with security specialists in the course of their work.
Also important is eliminating the intellectual mind-set that characterizes many graduates of today’s IT educational programs—a “cowboy” mentality antithetical to the disciplined and structured approach needed to design and develop secure systems. In the not-so-distant past, it was fairly routine for the pressure of bringing products and services to market quickly to take precedence over all other considerations, including security. While this mind-set has begun to change, and vendors are realizing that paying attention to security is likely to have some impact on their bottom line, the committee strongly believes that there is a long way to go before a disciplined and structured development effort is routine at all vendors.
In the short run, organizations will adopt this approach if it enables them to ship a security-acceptable product more quickly or cheaply, and they will train their programmers in-house. But in the long run, it is clear that the educational system will—and should—bear most of the burden of integrating security as an important educational element in almost every IT course. This will call for treating security as a co-equal to functionality and performance in most subjects.
The committee believes that those responsible for educating the future IT workforce must work with cybersecurity researchers if the integration of such a perspective is to occur. If a cybersecurity perspective is to become pervasive throughout the IT workforce, it will require a much larger number of faculty specializing in cybersecurity research. The number of such faculty, in turn, is a direct function of the sustained research support available, even acknowledging that not all teaching faculty are research faculty or vice versa.
The direct relationship between faculty size and research support is
particularly important if and when departments are contracting. In such times, it is difficult to obtain slots for any subspecialty, and especially so if—as is the case with the cybersecurity specialization—there is not a critical mass of those faculty members already in the department. Thus, targeted funding to support the cybersecurity specialization would be particularly important if the number of such faculty is to grow.
Support for infrastructure is also needed for cybersecurity education. Developing cybersecurity expertise requires hands-on experience with security products, so that their capabilities and limitations can be understood and intuitions developed for when they are or are not helpful. Such infrastructure is often neglected in funding programs, and those that do exist are limited in time, amounts, and schools.
The primary purpose of this report is to formulate a cybersecurity research agenda. But the scope and the nature of this agenda are inextricably intertwined with the character of the threat to cyberspace. Accordingly, this report argues that the threat to cybersecurity is real, significant, and growing rapidly. But because the combination of adversary threats and technical or procedural vulnerabilities of the future is impossible to predict in anything but the most general terms, a broad cybersecurity research agenda (Section 3.4.4, Principle 4: Respect the need for breadth in the research agenda.) is necessary to develop new knowledge that can be used to strengthen defenses against the cyberattacks of tomorrow. Furthermore, the research agenda must examine both technical and nontechnical issues. There is of course a central role to be played by technologists—but they must work hand in hand with organizational specialists, psychologists, anthropologists, sociologists, manufacturing specialists, and many others if the desired outcome—systems that are more secure in the real world—is to be achieved.
In Section 10.2, the committee identified five action items for the nation’s policy makers: creating a sense of urgency about the cybersecurity problem commensurate with the risks, supporting a robust and sustained research agenda at levels which ensure that a large fraction of good ideas for cybersecurity research can be explored, establishing a mechanism for continuing follow-up on a research agenda, supporting the infrastructure needed for cybersecurity research, and sustaining and growing the human resource base. If these items are successfully addressed, real progress can be made toward realizing a more secure cyberspace and toward making the Cybersecurity Bill of Rights more a reality than a vision.