Toward a Safer and More Secure Cyberspace (2007)

Suggested Citation:"8 Category 5 - Illustrative Crosscutting Problem-Focused Research Areas." National Research Council and National Academy of Engineering. 2007. Toward a Safer and More Secure Cyberspace. Washington, DC: The National Academies Press. doi: 10.17226/11925.

8
Category 5—Illustrative Crosscutting Problem-Focused Research Areas

While Chapters 4, 5, 6, and 7 address specific focus areas, this chapter presents a number of problems whose solutions will involve research described in all of those chapters.

8.1
SECURITY FOR LEGACY SYSTEMS

Organizations make large investments in making systems work properly for their business needs. If system deployment is complex or widespread, many organizations are highly reluctant to move to systems based on newer technologies because of the (often quite considerable) work that would inevitably be required to get the new systems to work as well as the old ones did. However, because legacy systems—by definition—embody design and architectural decisions made before the emergence of the current threat environment, they pose special challenges for security. That is, when new and unanticipated threats emerge, legacy systems must be retrofitted to improve security—and this is true even when careful design and attention to security have reduced the number of potential security vulnerabilities in the original legacy system.

In this context, the challenge is to add security without making existing software products, information assets, and hardware devices any more obsolete than is necessary. Research to support this goal has three components:

  1. Research is needed to address the relatively immediate security needs of legacy systems, as these systems will be with us for a long time to come.

  2. It is worthwhile to expend some significant effort to create new systems and networks that are explicitly designed to be secure, at least for critical systems whose compromise would have high consequences. Research on clean-slate designs for secure and attack-resilient architectures will show what can be achieved when these efforts are relieved of the need to fit into an insecure existing framework, and it may be that new design approaches will make it possible to achieve performance, cost, and security goals simultaneously.

  3. Research effort should be explicitly focused on easing the transition path for users of today’s information technology applications to migrate to secure-by-design systems in the future—a path that is likely to take years or decades to accomplish even after such “from-the-start secure” systems are designed and initially deployed. (Box 8.1 presents further discussion of this point.)

One key issue in the security of legacy systems is patch management. Tinkering with existing legacy systems—for whatever reason—can result in severe operational problems that take a great deal of time and effort to resolve, but fixing security problems almost always requires tinkering. Therefore, operational managers are often faced with choosing between the risk of installing a fix to some vulnerability (that is, the installation of the patch may disrupt operations or even introduce a new vulnerability) and the risk of not installing it (that is, attackers might be able to exploit the vulnerability). Further, the installation of a patch generally necessitates a set of new tests to ensure both that the vulnerability has been repaired and that critical operational functionality has not been lost. If it has been lost, a new cycle of patch-and-test is needed. These cycles are both costly and inherently time-consuming, and consequently many systems managers avoid them if at all possible. Such dilemmas are exacerbated by the fact that it is often the very release of a fix that prompts an attack.1

One area of research thus suggested is the development of a methodology that will help operational managers decide how to resolve this dilemma.
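One way such a methodology might be framed is as an expected-loss comparison between installing a patch and deferring it. The sketch below is purely illustrative: the function names, probabilities, and costs are hypothetical inputs that a real methodology would have to estimate from operational and threat data, and a deployed tool would also account for the patch-and-test cycles described above.

```python
# Illustrative sketch only: a toy expected-loss comparison for the
# patch/no-patch decision. All names and numbers are hypothetical.

def expected_loss_patch(p_disruption, cost_disruption, p_regression, cost_regression):
    """Expected loss if the patch is installed (disruption or a new regression)."""
    return p_disruption * cost_disruption + p_regression * cost_regression

def expected_loss_no_patch(p_exploit, cost_breach):
    """Expected loss if the vulnerability is left unpatched."""
    return p_exploit * cost_breach

def recommend(p_disruption, cost_disruption, p_regression, cost_regression,
              p_exploit, cost_breach):
    """Recommend 'patch' when its expected loss is no worse than deferring."""
    patch = expected_loss_patch(p_disruption, cost_disruption,
                                p_regression, cost_regression)
    no_patch = expected_loss_no_patch(p_exploit, cost_breach)
    return "patch" if patch <= no_patch else "defer"

# With these (hypothetical) numbers, the expected loss of leaving the
# vulnerability open dominates, so the sketch recommends patching.
print(recommend(p_disruption=0.10, cost_disruption=50_000,
                p_regression=0.05, cost_regression=20_000,
                p_exploit=0.30, cost_breach=500_000))
```

Such a comparison is only a starting point; as the text notes, the very release of a fix can raise the probability of exploitation, so the inputs are coupled and time-dependent.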

1

This paradoxical situation results from the fact that the release of a fix is publicized so that it can be disseminated as widely as possible. The publicity about the fix can alert would-be attackers to the existence of the vulnerability in the first place, and the fix itself can usually be “disassembled” in order to reveal the nature of the original vulnerability. Because some installations will not install the fix, would-be attackers gain opportunities that would not otherwise become available.


BOX 8.1

Issues in System Migration

One important dimension of security for legacy systems involves strategies for migrating to systems that are more inherently secure. In this context, it is often the case that a migration strategy needs only to preserve existing assets. For example, a user may have a large investment in data files of a given format that are required for a given version of a program. A new version that is more inherently secure may well require files of a different format. One strategy to preserve assets may be to require the new version to open all files in the old format. A different strategy may call for a conversion utility to convert old files to the new format.

The first strategy might be deemed a requirement for backward compatibility—that is, the new system should operate as the old one did in a manner that is as transparent as possible to the user. But all too often, the requirement for full backward compatibility complicates the security problem—backward compatibility may, explicitly or implicitly, call for building in the same security vulnerabilities in an attempt to preserve the same functional behavior. (For example, a large fraction of the Windows XP system code base is included for backward compatibility with Windows 98 and Windows 2000—a fact that is well recognized as being responsible for many vulnerabilities in XP.)

In the second approach, the migration to a more secure system is made easier by the weaker requirement that only the data assets of the earlier generation be preserved (or made usable) for the new system. The duplication of all functional behavior is explicitly not a requirement for this approach, although it remains a significant intellectual challenge to determine what functional behavior must and must not carry over to the new system.

Another fact about system migration is that with distributed systems in place, it is very difficult, from both a cost and a deployment standpoint, to replace all the legacy equipment at once. This means that for practical purposes, an organization may well be operating with a heterogeneous information technology environment—which means that the parts that have not been replaced are likely still vulnerable, and their interconnection to the parts that have been replaced may make even the new components vulnerable. The result of this tension is often that no meaningful action for security improvement takes place.
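The second strategy described in Box 8.1, a one-way conversion utility, can be sketched as follows. The fixed-width record layout and field names here are hypothetical; the point is that the converter preserves the data assets while validating them, rather than reproducing legacy behavior (and its vulnerabilities) in the new system.

```python
# Illustrative sketch of a one-way conversion utility: preserve legacy
# *data* in a new format without reproducing legacy *behavior*. The
# fixed-width layout and field names are hypothetical.
import json

# (field name, start column, end column) for the hypothetical legacy format
LEGACY_FIELDS = [("account_id", 0, 8), ("owner", 8, 28), ("balance", 28, 40)]

def convert_record(line):
    """Parse one fixed-width legacy record, validating as we go so that
    malformed legacy data is rejected rather than silently carried over."""
    record = {}
    for name, start, end in LEGACY_FIELDS:
        record[name] = line[start:end].strip()
    if not record["account_id"].isdigit():
        raise ValueError(f"bad account id: {record['account_id']!r}")
    record["balance"] = int(record["balance"])  # new format stores numbers
    return record

def convert_file(legacy_lines):
    """Convert an iterable of legacy lines to JSON lines (the new format)."""
    return [json.dumps(convert_record(l)) for l in legacy_lines if l.strip()]

# Build a sample legacy record programmatically to get the padding right.
sample = "00001234" + "Alice Example".ljust(20) + "9950".rjust(12, "0")
print(convert_file([sample])[0])
```

Because only the data cross the boundary, the new system is free to drop legacy functional quirks; deciding which behaviors must carry over remains, as the box notes, the hard intellectual problem.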

A second area of research relevant to the security of legacy systems is that of program understanding. Program-understanding tools are essential for addressing security issues that arise in legacy systems for which documentation is poor and original expertise is scarce. The reason is that legacy systems continue to play essential operational roles long after their technological foundations are obsolete and after the departure of the individuals who best understand the systems. But as new security issues arise in these legacy systems, a detailed understanding of their internal operation and of how actual system behavior differs from intended behavior is necessary in order to address these issues. Tools that help new analysts understand flows of control and data can facilitate such understanding and the “reverse-engineering” of legacy systems.
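A minimal sketch of one such program-understanding aid is a call-graph extractor. The example below uses the Python standard library's `ast` module; real reverse-engineering tools for legacy systems must handle compiled binaries and many languages, so this only illustrates the control-flow-mapping idea, and the sample "legacy" code is invented.

```python
# Sketch of a program-understanding aid: extract a function-level call
# graph from Python source with the standard library's ast module.
import ast

def call_graph(source):
    """Map each function name in `source` to the sorted names it calls."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = set()
            for sub in ast.walk(node):
                # Only direct calls by simple name are captured here.
                if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                    calls.add(sub.func.id)
            graph[node.name] = sorted(calls)
    return graph

# Hypothetical undocumented legacy code an analyst might be handed.
legacy_snippet = """
def validate(record):
    return record is not None

def process(record):
    if validate(record):
        store(record)

def store(record):
    pass
"""
print(call_graph(legacy_snippet))
```

Even this crude graph tells an analyst which routines gate access to which others, which is the kind of control-flow insight the text describes.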

8.2
THE ROLE OF SECRECY IN CYBERDEFENSE

Should the inner operations of security mechanisms be kept secret or not? It is widely assumed in much of the unclassified research community—especially the community associated with open-source software—that the correct answer to this question is “No.” This answer is based on the idea that secrecy prevents the security community from examining the mechanism in question and in so doing eliminates the opportunity for a rigorous peer review (e.g., finding flaws in results, verifying results independently, and providing [open] building blocks that others can build on [thereby fostering research progress]).2 There is a further belief in this community that a weak system can usually be compromised without knowledge of what is purportedly secret.3

In the classified cybersecurity community, the opposite view is much more prominent. In this view, secrecy of mechanism throws up an additional barrier that an adversary must penetrate or circumvent in order to mount a successful attack, but in no event is secrecy the only or even the primary barrier that should be established. Vendors, even of products for civilian use, also have an interest in keeping implementations secret (under existing trade secret law).

Both points of view have merit under some circumstances, and a number of researchers have sought to reconcile them. For example, Spafford argued in 1996 that unless an exploit is actually being used in a widespread manner, it is better not to publish details of a flaw, because doing so would result in a much larger risk of exposure.4 This is true even if a fix is available, since the mere availability of a fix does not guarantee—or even nearly guarantee—that the fix will be installed. Some will not hear of the fix; some will not be able to install it because of certification requirements; some will not have the expertise to install it; some will fear the subsequent breakage of some essential element of system functionality. More recently, Swire has argued that secrecy is most useful to the defense on the first occasion of an attack on a computer system but that it is far less effective if an adversary can probe the defenses repeatedly and learn from those probes.5 The National Research Council itself commented on this tension in 1998 (Box 8.2). Additional research should be done to shed more light on appropriate uses of secrecy in cybersecurity.

2

Spafford goes so far as to argue that open-source development is an issue that is orthogonal to security. See http://homes.cerias.purdue.edu/~spaf/openvsclosed.html.

3

A related argument applies to data and history. Whether data and development history are protected by national security classifications or trade secrets, their unavailability to the community at large prevents the community from using that data and history to understand why systems fail or the origins of a particular kind of bug or flaw.

4

Eugene Spafford, “Cost Benefit Analyses and Best Practices,” Practical Unix and Internet Security, Simson Garfinkel and Eugene Spafford (eds.), O’Reilly Press, Cambridge, Mass., 2003.

Presuming that there are some circumstances in which secrecy is an asset to cyberdefense, an additional research question arises: To what extent is it possible to keep any mechanism secret when it is widely deployed? What technological approaches can be used to increase the likelihood that a widely deployed mechanism can be kept secret?

8.3
INSIDER THREATS

The majority of cybersecurity research efforts are focused on making it more difficult for “outside” adversaries to compromise information systems. But, as the cases of Robert Hanssen and Aldrich Ames suggest, insiders can pose a considerable security risk as well. Indeed, much of the past 10 to 15 years of U.S. counterintelligence history suggests that the threat to national security emanating from the trusted insider is at least as serious as the threat from the outsider.6 Insiders can be in a position to do more harm to services and resources to which they have authorized access than can outsiders lacking such access; these concerns are particularly important in contexts in which safe operation depends on good decisions being made by systems operators. Insiders can also leverage their authorized access to obtain information to extend their access.

The compromised insider presents a more difficult security challenge than that posed by hostile outsiders. The first rule about security is to keep hostile parties away, and the insider, by definition, has bypassed many of the barriers erected to keep him or her away. Moreover, a compromised insider may work with outsiders (e.g., passing along information that identifies weak points in an organization’s cybersecurity posture).

Compromised insiders fall into two categories—knowing and unknowing. Knowingly compromised insiders—those who know they are acting on behalf of an adversary—are most likely associated with a high-end threat, such as a hostile major nation-state. Their motivations vary widely and include the desire for recognition for hacking skills, ideological convictions, and monetary incentives; they may become compromised because of bribery, blackmail, ideological or psychological predisposition, or successful infiltration, among other reasons. By contrast, unknowingly compromised insiders are the victims of manipulation and social engineering. In essence, unknowingly compromised insiders are tricked into using their special knowledge and position to assist an adversary.

5

Peter Swire, “A Model for When Disclosure Helps Security: What Is Different About Computer and Network Security?,” Journal on Telecommunications and High Technology Law, Vol. 2, 2004.

6

For this report, the term “insider” is used to denote an individual in an authorized position whose actions can materially affect the operation of the information technology systems and networks associated with critical infrastructure in a negative way. Since not all “insiders” pose a threat, the terms “inappropriately trusted insider” or “compromised insider” are used to mean an insider with the willingness and motivation to act improperly with respect to critical infrastructure. The term “outsider” refers to an individual who is not in the position of an “insider.”

BOX 8.2

Secrecy of Design

Secrecy of design is often deprecated with the phrase “security through obscurity,” and one often hears arguments that security-critical systems or elements should be developed in an open environment that encourages peer review by the general community. Evidence is readily available about systems that were developed in secret only to be reverse-engineered and to have their details published on the Internet and their flaws pointed out for all to see. But open-source software has often contained security flaws that have remained for years as well.1

The argument for open development rests on certain assumptions, including these: the open community will have individuals with the necessary tools and expertise, they will devote adequate effort to locate vulnerabilities, they will come forth with vulnerabilities that they find, and vulnerabilities, once discovered, can be closed—even after the system is deployed.

There are environments, such as military and diplomatic settings, in which these assumptions do not necessarily hold. Groups interested in finding vulnerabilities here will mount long-term and well-funded analysis efforts—efforts that are likely to dwarf those that might be launched by individuals or organizations in the open community. Further, these well-funded groups will take great care to ensure that any vulnerabilities they discover are kept secret, so that they may be exploited (in secret) for as long as possible.

Special problems arise when partial public knowledge about the nature of the security mechanisms is necessary, such as when a military security module is designed for integration into commercial off-the-shelf equipment. Residual vulnerabilities are inevitable, and the discovery and publication of even one such vulnerability may, in certain circumstances, render the system defenseless. It is, in general, not sufficient to protect only the exact nature of a vulnerability. The precursor information from which the vulnerability could be readily discovered must also be protected, and that requires an exactness of judgment not often found in group endeavors. When public knowledge of aspects of a military system is required, the most prudent course is to conduct the entire development process under cover of secrecy. Only after the entire assurance and evaluation process has been completed—and the known residual vulnerabilities identified—should a decision be made about what portions of the system description are safe to release.

Any imposition of secrecy, about either part or all of the design, carries two risks: that a residual vulnerability could have been discovered by a friendly peer reviewer in time to be fixed, and that the secret parts of the system will be reverse-engineered and made public, leading to the further discovery, publication, and exploitation of vulnerabilities. The first risk has historically been mitigated by devoting substantial resources to analysis and assurance. (Evaluation efforts that exceed the design effort by an order of magnitude or more are not unheard of in certain environments.) The second risk is addressed with a combination of technology aimed at defeating reverse-engineering and strict procedural controls on the storage, transport, and use of the devices in question. These controls are difficult to impose in a military environment and effectively impossible in a commercial or consumer one.

Finally, there is sometimes a tension between security and exploitation that arises in government. Intelligence agencies have a stake in concealing vulnerabilities that they discover in systems that an adversary uses, because disclosure of such a vulnerability may lead the adversary to fix it and thus render it useless for intelligence-gathering purposes. If the vulnerability also affects “friendly” systems, a conflict arises about whether the benefits of exploitation do or do not outweigh the benefits of disclosure.

1See, for example, Steve Lodin, Bryn Dole, and Eugene H. Spafford, “Misplaced Trust: Kerberos 4 Random Session Keys,” Proceedings of the Internet Society Symposium on Network and Distributed System Security, pp. 60-70, February 1997.

SOURCE: Adapted largely from National Research Council, Trust in Cyberspace, National Academy Press, Washington, D.C., 1998.

Regarding the knowingly compromised insider, a substantial body of experience suggests that it ranges from very difficult to impossible to identify with reasonable reliability and precision individuals who will actually take hostile actions on the basis of their profiles or personal histories. (For example, it is often hard to distinguish merely quirky employees from potentially dangerous individuals, and there is considerable anecdotal evidence that some system administrators have connections to the criminal hacker underground.) Thus, the identification of compromised insiders must rely on analyses of past and present behavior.7 (That is, it may be possible to infer intent and future behavior from usage signatures, although the consequences of false positives here may be quite high.) In other words, it is highly unlikely that general means for detecting potential spies and saboteurs will be developed; therefore, barriers to particular acts are necessary instead.

7

More precisely, the identification of a compromised insider depends first on identifying behavior or actions that are anomalous or improper, and then on associating an individual with that behavior or those actions. An intrusion-detection system typically flags anomalous behavior, and association of that behavior with an individual depends on higher-level systems issues, such as policies, radio-frequency identification proximity sensors to autolock machines, authenticated systems logs, and so on.

The knowledge base about how to defend against compromised insiders is not extensive, at least by comparison with the literature on defending against “outsiders.” Still, there is general agreement that a multifaceted defensive strategy is more likely to succeed than is an approach based on any one element. Some of the relevant elements include the following:

  • Technology. Authentication and access control are two well-known technologies that can help to prevent an insider from doing damage. Strong authentication and access controls can be used together to ensure that only authorized individuals gain access to a system or a network and that these authorized individuals have only the set of access privileges to which they are entitled and no more. As noted in Section 6.5, tools to manage and implement access-control policies are an important area of relevant research; with such tools available to and used by systems administrators, the damage that can be caused by someone untrustworthy and unaccountable can be limited, even if he or she has improper access to certain system components.

    Forensic measures (Section 7.3) and MAD systems (Section 5.2) can also play an important role in deterring the hostile activity of a compromised insider. For example, audit trails can monitor and record access to online files containing sensitive information or execution of certain system functions, and contemporaneous analysis may help to detect hostile activity as it is happening. However, audit trails must be kept for all of the users of a system, and the volume of data generally precludes comprehensive analysis on a routine basis. Thus, automated audit trail analyzers could help to identify suspicious patterns of behavior that may indicate the presence of a compromised insider. In addition, it may be more or less important to audit the records of an individual, depending on the criticality of the resources available to that person; automated tools that select appropriate audit targets would therefore be helpful to develop. Note also that maintaining extensive logs may in itself pose a security risk, as logs may be used to help re-create confidential or classified material held in otherwise restricted data files. For instance, keystroke logs may contain passwords or formulae, and logs of references consulted may be used to reverse-engineer a secret process. Thus, logs may need to be protected to a level as high as (or higher than) anything else on the system.

  • Organizations. In an environment in which most employees are indeed trustworthy, what policies and practices can actually be implemented that will help to cope effectively with the insider threat? Known organizational principles for dealing with a lack of trust include separation of duties and mandatory job rotation and vacations; these are often used in the financial industry. Such principles often generate specific technical security requirements that are not considered explicitly in technical discussions of security. (For example, separation of duties requires that one person not play two roles—a fact that requires an organization’s security architecture to enforce a single identity for an individual rather than multiple ones.) Research is needed in how to define, describe, manage, and manipulate security policies. Systems can be abused through both bad policy and bad enforcement. Tools are needed to make setting and enforcing policy easier. For example, a particularly useful area of investigation would be to gain a more complete understanding of what sophisticated and successful systems administrators do to protect their systems. Encapsulating and codifying that knowledge would provide insight into what the best kinds of defense are.

  • Management. Recent movements toward more-open architectures along with more collaboration and teamwork within and across institutions present management challenges. For example, certain information may be intended for distribution on a need-to-know basis, but given a shift toward more-collaborative exercises, determining who needs to know what and constraining the sharing of information to that end is difficult. In both business and government, there has been a significant movement toward embracing cooperation across organizations and sectors, but this, of course, introduces security problems.

  • Legal and ethical issues. Many privacy and workplace-surveillance issues need to be addressed when an organization determines how to implement tools to decrease the possibility of insider malfeasance. For example, many anomaly-detection systems require the collection of large amounts of data about the activities of individuals in order to establish a baseline, deviations from which may indicate anomalous behavior.

    Both the fact of such collection and how those data are handled have serious privacy implications, from both a legal and an ethical standpoint. One of the most important of these issues is that it is all too easy for an organization to be both very security-aware and employee-unfriendly at the same time. That is, even if draconian security measures are legal (and they may be of questionable legality), the result may be an environment in which employees feel that they are not trusted, with a concomitant lowering of morale and productivity and perhaps higher turnover. For example, an environment in which employees police one another for violations of security practice may breed distrust and unease among colleagues. Conversely, an environment that provides trusted mechanisms for dispute resolution and justice can promote a greater sense of camaraderie. The interplay between employment laws and the need for system security is also a concern. For example, the termination of suspected individuals may not occur immediately, and thus such people may maintain access while the necessary paperwork goes through channels.
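As a concrete illustration of the automated audit-trail analysis described under "Technology" above, the following sketch builds a per-user baseline of accessed resources and flags departures from it. The log schema (simple user/resource pairs) and the burst threshold are hypothetical; a deployed analyzer would use far richer features and statistical models, and would have to address the false-positive and privacy concerns discussed in this section.

```python
# Illustrative sketch of automated audit-trail analysis: learn which
# resources each user normally touches, then flag off-baseline accesses.
# Log schema and thresholds are hypothetical.
from collections import defaultdict

def build_baseline(history):
    """history: iterable of (user, resource) pairs from past audit logs."""
    baseline = defaultdict(set)
    for user, resource in history:
        baseline[user].add(resource)
    return baseline

def flag_anomalies(baseline, new_events, burst_threshold=3):
    """Flag accesses to resources outside a user's baseline, and users
    with many such accesses in one batch (a possible collection run)."""
    flags = []
    off_baseline = defaultdict(int)
    for user, resource in new_events:
        if resource not in baseline.get(user, set()):
            flags.append((user, resource))
            off_baseline[user] += 1
    bursts = [u for u, n in off_baseline.items() if n >= burst_threshold]
    return flags, bursts

history = [("alice", "payroll.db"), ("alice", "hr_wiki"), ("bob", "source_repo")]
new = [("alice", "payroll.db"), ("bob", "payroll.db"),
       ("bob", "hr_wiki"), ("bob", "export_tool")]
flags, bursts = flag_anomalies(build_baseline(history), new)
print(flags)   # bob's three off-baseline accesses
print(bursts)  # bob exceeds the burst threshold
```

Note that the flagged events identify only anomalous behavior; as footnote 7 observes, attributing that behavior to a particular individual is a separate systems problem.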

Research is also needed to understand the circumstances under which an insider threat is (or is not) a concern serious enough to warrant substantial attention. Systems often embed unrealistic assumptions about insiders. For instance, it is common in networked enterprises to assume that one cannot and should not worry about insider attacks, meaning that nothing is done about insiders who might abuse the network. This approach leaves major security vulnerabilities in new networking paradigms in which individual user devices participate in the routing protocol. But in more traditional networking paradigms, individual user devices do not participate in the routing protocol, and thus this particular security vulnerability is of less concern.

As for the unknowingly compromised insider, effective defenses against trickery are very difficult to deploy.8 Adversaries who engage in such trickery are experts at exploiting the willingness of people to be helpful—a process often known as “social engineering.” These adversaries use people to provide inside information, and they use people by taking advantage of situations that cause breakdowns in normal procedures. In short, they help human error to occur.

For example, badges are often required for entry into a secure facility, and passwords are required to access the computer network. However, entry and access can often be obtained in the following manner: Walk up to the door carrying an armload of computers, parts, and dangling cords. Ask someone to hold the door open, and thank them. Carry the junk over to an empty cubicle, look for the password and log-in name that will be on a Post-it note somewhere, and log in. If you cannot log in, ask someone for help. As one guide for hackers puts it, just shout, “Does anyone remember the password for this terminal? … you would be surprised how many people will tell you.”9

8

This discussion of social engineering is drawn largely from National Research Council, Information Technology for Counterterrorism: Immediate Actions and Future Possibilities, The National Academies Press, Washington, D.C., 2003.

The reason that social engineering succeeds is that, in general, people (e.g., employees of an organization) want to be helpful. It is important to counter social engineering if cybersecurity is to be achieved, but whatever that entails, the solution must not be based on extinguishing the tendencies of people to be helpful. The reason is that helpful people play a key role in getting any work done at all—and thus the research challenge is to develop effective techniques for countering social engineering that do not require wholesale attacks on tendencies to be helpful.

Some of the approaches described above for dealing with the knowingly compromised insider are relevant. For example, compartmentalization or a two-person rule might be useful in combating social engineering. But as a general principle, approaches based on deterrence will not work—simply because deterrence presumes that the party being deterred knows that he or she is taking an action that may result in a penalty, and most people who are trying to be helpful do not expect to be punished for doing so.

8.4
SECURITY IN NONTRADITIONAL COMPUTING ENVIRONMENTS AND IN THE CONTEXT OF USE

As noted in Section 3.4.1.2, cybersecurity research that is situated in the context of use has a greater likelihood of being adopted to solve security problems that occur in that context. This section provides several illustrative examples.

8.4.1
Health Information Technology

Health-related information spans a broad range and includes the medical records of individual patients, laboratory tests, the published medical literature, treatment protocols, and drug interactions, as well as financial and billing records and other administrative information. Today’s health care system suffers from information deficiencies: the relevant information is frequently not available (even though it may exist somewhere) at the right time and in the right place to support good decision making. The intensive use of information technology (IT) to acquire, manage, analyze, and disseminate health care information holds great potential for reducing or eliminating these information deficiencies, and a variety of reports clearly document the benefits of electronic medical records and computer-based clinical decision-support tools for health care workers.

9. See “The Complete Social Engineering FAQ”; available at http://morehouse.org/hin/blccrwl/hack/soceng.txt.

At the same time, it is also broadly understood that ensuring the privacy and security of personal health-related information is a precondition for the widespread acceptance of health information technologies into clinical practice. Security requirements for such systems span a very large range, including both record-keeping systems and embedded systems that improve or enable the performance of many medical devices and procedures.

Security issues of special importance to health IT systems include the following:

  • Conditional confidentiality. In general, only pre-authorized individuals should have access to personal health information. However, in emergency situations in which the patient is unable to give explicit consent, medical personnel without previous authorization may need access.

  • Secure diagnostic and treatment systems. Medical technologies (e.g., radiation devices for treating cancer, scanners, pacemakers) are increasingly controlled by computer. Software for these systems must be especially resistant to hostile compromise if their safety is to be ensured.

  • Usability. Health care providers are particularly sensitive to workplace demands that reduce the amount of time they can spend in actual patient care, and a matter of a few seconds of additional unproductive time per patient can mean the difference between an acceptable system and an unacceptable one. Security functionality, in particular, is notorious for wasting users’ time—and thus special attention to user needs in a health care environment is warranted.

  • Record integrity. Users and patients must be confident that the contents of a medical record are not altered undetectably and that data in transmission are not changed or corrupted.

  • Auditability. This function ensures that all medical interventions and diagnoses are recorded and associated with a responsible individual, and also that all parties viewing a record can subsequently be audited for having an appropriate need to know. Nonrepudiation is an essential part of auditability for ensuring that a responsible individual cannot plausibly deny responsibility for a decision.
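The conditional-confidentiality and auditability requirements above can be combined in a "break-glass" access-control sketch. Everything here (class names, fields, and the policy itself) is invented for illustration; this is a minimal sketch of the idea, not a real health IT system:

```python
from datetime import datetime, timezone

class MedicalRecordStore:
    """Toy record store: pre-authorized access, emergency override, full audit trail."""

    def __init__(self):
        self.records = {}      # patient_id -> record text
        self.authorized = {}   # patient_id -> set of pre-authorized user ids
        self.audit_log = []    # every access attempt, granted or not, is recorded

    def add_record(self, patient_id, text, authorized_users):
        self.records[patient_id] = text
        self.authorized[patient_id] = set(authorized_users)

    def read(self, user, patient_id, emergency=False):
        allowed = user in self.authorized.get(patient_id, set())
        # Conditional confidentiality: an emergency overrides pre-authorization,
        # but the override itself is flagged for later review (auditability).
        granted = allowed or emergency
        self.audit_log.append({
            "user": user,
            "patient": patient_id,
            "time": datetime.now(timezone.utc).isoformat(),
            "emergency_override": emergency and not allowed,
            "granted": granted,
        })
        if not granted:
            raise PermissionError(f"{user} has no access to {patient_id}")
        return self.records[patient_id]
```

Every read, including the denied one, leaves an audit entry tied to a responsible individual, so emergency overrides can be reviewed after the fact rather than blocked in the moment.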

In general, these security and privacy functions do not require technical advances beyond what is known today. Nevertheless, the integration of known security and privacy techniques with the particulars of a very demanding health care environment is an exemplar of the importance of situated research and development.

8.4.2
The Electric Power Grid

The electric power grid is a national infrastructure that links generating stations through transmission lines and distribution lines to customer loads. High-voltage transmission lines connected in a mesh network bring power from generating stations to lower-voltage distribution lines that connect to customer loads in a radial topology. These elements (generation, transmission, and distribution facilities) in a geographical area may not share a single owner; in many states, generation has been deregulated, meaning that generators compete with one another in power markets to sell their power.

The hundreds of organizations that own portions of the power grid, and the even larger number of entities (vendors, contractors, market players, and so on) that interact with it, use very large numbers of computers. Some parts of the grid’s cyber-infrastructure operate, control, or otherwise directly or indirectly modify the workings of the grid.

The monitoring and control of the power grid are done by computerized control centers. The grid is divided into “control areas,” and each control area is monitored and controlled by a control center using a Supervisory Control and Data Acquisition (SCADA) system. Quite often, the real-time data gathered by the SCADA systems can be analyzed to predict the effects of contingencies (e.g., short circuits that may cause outages of lines or generators, thus overloading other lines or causing other limit violations) and to identify possible remedial actions to guard against such contingencies. The computer systems used to conduct such analysis are known as Energy Management Systems (EMS), and these control centers are often called SCADA-EMS (or simply EMS).

The SCADA systems are connected by communications channels (usually microwave today) to all the substations and generating stations in the control area, and the real-time data are gathered by the SCADA system polling the remote terminal units (RTUs) at the substations. That SCADA system may have communications with other SCADA systems in neighboring control areas or with other control centers in the same area.

In recent years, intelligent electronic devices (IEDs) have proliferated in the substations and generating stations. These microprocessor-based devices perform the usual local functions of control, protection, and switching, but they can also perform other enhanced functions, including the gathering and storage of data at much faster rates. These IEDs are usually accessible remotely, and many utilities use Internet connectivity to conduct normal engineering functions on such substation equipment.


Given the increasing demand for electric power, it is inevitable that the electric power industry will continue to seek ever-higher efficiencies in the existing grid so as to minimize the expense of constructing new grid elements. Thus, interconnections among the various control centers of the grid must be taken as a given, with all of the vulnerabilities that such extensive interconnections imply.

There is broad agreement that the communications infrastructure that connects the substations to the control area SCADA systems, developed in the 1960s and 1970s, is too slow for today’s purposes.10 Faster communications will allow more wide-area (rather than local) and distributed (rather than central) control, which in turn may require distributed bases of real-time data that are gathered and stored using publisher-subscriber methods and middleware that monitors the quality of service (QoS).

An approach based on deploying a faster but isolated cyber-infrastructure for the power grid is conceptually the simplest. But in addition to its high cost, this approach, at least when taken to its logical extremes, also results in a loss of flexibility and convenience from the standpoint of many engineering and market functions, especially regarding intercommunications, interoperability, and rapid response. An alternative is to develop design guidelines for the evolving cyber-infrastructure that will allow the flexibility of interconnectivity but with controlled and managed risks of penetration. While this approach preserves the lower expenses associated with “piggybacking” on existing infrastructure, it has the major drawback that commercially available computer and communications infrastructures are neither secure enough nor robust enough to support such use.

The new cyber-infrastructure must be able to withstand various contingencies such as malicious threats, human errors, and environmental hazards. (Note that malicious threats may come from disgruntled employees and former employees who have detailed insider knowledge or from enemy nations or terrorists with access to expert knowledge.) Although the power grid must be able to withstand the threat of physical attack on generators and transmission lines, another security concern arises if an adversary can attack the power grid remotely.

In addition, the surprisingly large number of very large scale outages in the United States in the past 40 years raises the question of whether the infrastructure is reliable enough even in the absence of malicious misuse. Indeed, many of those outages could have been triggered maliciously or intentionally, exploiting exactly the same vulnerabilities that were the cause of the accidental outages. (Some of these outages occurred even though operators had previously insisted that various improvements that had been made in the grid technology would prevent such occurrences in the future.)

10. United States Department of Energy, Office of Electric Transmission and Distribution, National Electric Delivery Technologies Roadmap, January 2004; available at http://www.electricdistribution.ctc.com/pdfs/tech_roadmap.pdf.

The main technical and administrative challenge for the future is not merely to secure the cyber-infrastructure of the grid today, but to guide its evolution so that the grid is not vulnerable to cyberattacks or to the propagation of accidental effects. Because the main purpose of the cyber-infrastructure is to operate the grid reliably, securely, and economically, advances in communications, computation, and control technologies will continue to push the cyber-infrastructure in directions that accommodate improved control. A major task is then to determine design factors that meet the cybersecurity and reliability objectives in ways that are consistent with the control and economic objectives of the grid. The entirety of an interconnected grid must be considered as a single system, and developed and analyzed accordingly. This is difficult because the providers are independent and disjoint private entities. However, neither total deregulation nor complete government regulation is compatible with the needs stated above.

Some of the important cybersecurity issues for the grid include the following:

  • Developing lightweight cybersecurity mechanisms. Computers used for operational control generally run at high duty cycle because of premiums on efficiency and on controlling many systems, and thus there is often little capacity for undertaking activities such as anomaly detection, virus updates, or penetration testing. Although advances in hardware capability could, in principle, mitigate this problem, historically utility operators have adopted a relatively slow refresh rate for technology. Lightweight mechanisms and testing practices that consume minimal system resources while being used on an operational system would be more likely to be used in practice.

  • Developing better forensics for SCADA systems and programmable logic controllers. For example, logs for these systems generally record physical parameters but not the inbound commands or communications or the originator of those commands. Anomaly detection is also uncommon in these systems, although the highly structured and stylized nature of commands to these systems should make it easier to detect anomalies.

  • Implementing cybersecurity measures that can operate in an interrupt-heavy real-time environment. Because programmable logic controllers operate multiple devices, the timing of interruptions from various devices can make program flow highly unpredictable and can thus complicate any security analysis that may be performed.
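The second bullet's observation, that the highly structured and stylized nature of SCADA command traffic makes anomalies easier to detect, can be sketched with a simple grammar whitelist. The command format and RTU naming below are invented for illustration; real protocols such as Modbus or DNP3 differ in detail:

```python
import re

# Hypothetical command grammar for master-to-RTU traffic. Legitimate traffic
# is narrow and stylized, so anything outside the grammar is suspect.
VALID_COMMAND = re.compile(
    r"^(READ|WRITE|TRIP|CLOSE) RTU-\d{1,3} POINT-\d{1,4}( VALUE=-?\d+(\.\d+)?)?$"
)

# The control area's inventory of remote terminal units (illustrative).
KNOWN_RTUS = {f"RTU-{n}" for n in range(1, 11)}

def is_anomalous(command: str) -> bool:
    """Flag commands that fall outside the narrow grammar SCADA traffic follows."""
    m = VALID_COMMAND.match(command)
    if not m:
        return True               # malformed: not in the grammar at all
    rtu = command.split()[1]
    if rtu not in KNOWN_RTUS:
        return True               # addressed to an RTU this control area does not own
    if m.group(1) == "WRITE" and "VALUE=" not in command:
        return True               # a WRITE must carry a value
    return False
```

A real detector would also model command sequencing and timing, but even this level of structure is far more tractable than anomaly detection over general-purpose network traffic.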

In general, cybersecurity issues for the electric power grid include (but are not limited to) the possibility of electronically compromising substations operated remotely, tricking operators of control centers into doing harmful things with false or delayed data, managing the high cost of falsely identifying an authorized party as an unauthorized one, and modeling the electric grid in order to understand its vulnerabilities.

8.4.3
Web Services

Web services provide application components and attendant IT resources with defined interfaces that interact over the Web. Any given Web service is also frequently used by multiple organizations.

The commercial objectives are rapid deployment of business offerings, shorter process cycles, synergy between businesses, and customer benefits through integration. One example of Web services is the programmatic interfaces made available through the World Wide Web (WWW) that serve the function of application-to-application communication. These Web services provide a standard means of interoperating between different software applications, running on a variety of platforms and/or frameworks.

WWW services are characterized by their interoperability and extensibility, as well as by their XML-based machine-processable descriptions. A second example of Web services is the Universal Description, Discovery and Integration (UDDI) specification, which defines a registry service for other Web services; this registry service manages information about service providers, service implementations, and service metadata. A third Web service is online storage and distributed data repositories that applications developers can exploit. Web services in general can be chained together in a loosely coupled way to create complex and sophisticated value-added services.

Many of the security issues that arise in Web-based computing are similar to those for local applications, but Web services have a number of additional security concerns that involve networking in an open environment. For example, Web services are loosely coupled in a more or less ad hoc manner. Thus, a dynamically established security model is necessary—that is, the security model is necessarily contextual—requiring an integration of intent across all of the components. How should such models be created? What does trust mean in such an environment? What security functionality is required of each component? How is such functionality asserted and substantiated by the application? How are authentication information and storage access rights passed from service to service in a dynamically assembled application? What is the functionality needed in tools for the analysis and specification of security policies for distributed storage?

8.4.4
Pervasive and Embedded Systems

Pervasive computing devices include sensor networks, ad hoc networks (e.g., car-to-car), and human-embedded processors, as well as the devices described in Section 2.1 (Interconnected Information Technology Everywhere, All the Time). Because pervasive computing systems will have programmable hardware processors and will be interconnected, they are subject to all the software and network-based security vulnerabilities that can affect other computing devices (e.g., dedicated computing systems). Furthermore, it is likely that linking together pervasive computing devices will result in the accessibility of significant amounts of potentially sensitive information, personal and otherwise. Such concentration poses both technical risk, because the information can be stolen or corrupted, and social/organizational risk, because the information can be misused by its custodians. The need to protect this information against these risks thus raises the level of security robustness that one might require of the information technology storing this information.

As in many of today’s computing devices, the vulnerabilities in pervasive computing will include those that arise from the complexity of the software likely to be used, the likely extensibility of the software built into these systems, and the connectivity of these devices. However, pervasive computing will call for security solutions and approaches to scale upward by many orders of magnitude—to accommodate many more components, many more systems, many more naïve users, many more deployment locations. Pervasive computing systems will also differ from today’s systems in several other ways:

  • They may be significantly resource-constrained. For example, the battery energy or computing capability may be limited, implying potentially undesirable trade-offs between security and cost or security and performance, as the implementation of security may be costly in computational capability.

  • They will be used by people with little knowledge of computing in any form, and thus cannot require a significant degree of attention to the details of security at all. Such users should be, at most, required only to specify the parameters of a desired security policy. Authentication of a person should be handled easily and naturally, without much cognitive effort, and the strength of the authentication should be matched automatically to the sensitivity of the application. See Section 6.1 (Usable Security) for more on this point.

  • They will be smaller in size, which may mean increased difficulty in creating and implementing good human interfaces for security.

  • They are far more subject to physical compromise (e.g., they may be unattended) and thus more susceptible to adversarial takeovers in hostile environments, destruction, theft, and loss.

  • System architectures for embedded systems need to be flexible enough to support the rapid evolution of security mechanisms and standards and need to provide in situ capabilities for remote upgrade.

One illustrative vulnerability in pervasive and embedded systems (and personal computers [PCs] as well!) arises from the fact that the programming of many such systems depends on the availability of a read-only memory (ROM) chip whose program contents assume control of the system upon power-up. In earlier days, a ROM chip could not be upgraded without physical access to remove and replace the chip itself. But today, most systems use Flash ROM chips that can be rewritten from software—a feature that greatly facilitates and reduces the cost of upgrades.

A device with a Flash ROM is thus potentially subject to compromise. For example, in 1999, the Chernobyl virus attacked the BIOS chip in many PC-compatible computers, with the result that the program stored in the BIOS memory chip of approximately 300,000 computers was corrupted. Once the programming in Flash ROM has been corrupted, its contents remain even after system restarts, power-off-and-on sequences, and system reinstallation. In other words, Flash ROM corruption defeats many commonly used recovery techniques.
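One common countermeasure is to have the device authenticate a firmware image before writing it to Flash ROM. The sketch below uses a shared-key HMAC purely for illustration; all names are invented, and production devices typically use public-key signatures with the verification key held in immutable storage:

```python
import hashlib
import hmac

# Assumption for this sketch: vendor and device share a secret key.
VENDOR_KEY = b"example-vendor-signing-key"

def sign_image(image: bytes) -> bytes:
    """Vendor side: compute the authentication tag shipped with the image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

class Device:
    """Toy device that refuses to flash an image whose tag does not verify."""

    def __init__(self, initial_image: bytes):
        self.flash = initial_image

    def update_firmware(self, image: bytes, tag: bytes) -> bool:
        expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
        # compare_digest avoids leaking the tag via timing differences.
        if not hmac.compare_digest(expected, tag):
            return False          # reject: image was not produced by the key holder
        self.flash = image
        return True
```

A rejected update leaves the existing flash contents untouched, so a corrupted image cannot silently displace the trusted boot code.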

What kinds of problems could be caused by a Flash ROM corruption? Kocher et al. use the example of an antiaircraft radar with an embedded real-time operating system.11 Within the system are several Flash ROM chips, and a corruption is introduced into one of them. Because the ROM programming is loaded into the system kernel on boot-up, it has trusted access to the entire bus—and its purpose is to cause the radar to ignore certain types of radar signatures.

Physical and side-channel attacks are also possible in systems in which an adversary cannot be denied physical access. Such attacks can be invasive or noninvasive. Invasive attacks against integrated circuits usually require expensive equipment. Examples include probing and reverse-engineering of the chip. In such attacks, the chip is depackaged and the chip layout is reconstructed through microscopy and the removal of the covering layers. Noninvasive attacks do not require the device to be opened; they include timing attacks, power analysis attacks, fault induction techniques, and electromagnetic analysis attacks.

11. Paul Kocher et al., “Security as a New Dimension in Embedded System Design,” Design Automation Conference, June 7-11, 2004, San Diego, Calif.; available at http://palms.ee.princeton.edu/PALMSopen/Lee-41stDAC_46_1.pdf.

8.5
SECURE NETWORK ARCHITECTURES

It is often observed that the principles on which the Internet is based were developed in a time in which trust among its users was the order of the day. But that situation no longer obtains, so an interesting question—with enormous practical relevance—is how a new Internet might be designed and architected with security as a principal feature.

In its purest form, the Internet can be conceptualized as a network that does its best to transmit bits between end-user nodes. These bits are not differentiated from one another, and a bit associated with a virus is delivered in exactly the same way as a bit associated with a query to a search engine. The processing of these bits, from reassembly to interpretation, is the responsibility of the end nodes. This end-to-end principle, and the lack of intelligence at the center of the Internet, has been a powerful force for innovation and cost-effective network implementation. But this principle—at least in its strongest, purest form—has come under intense scrutiny, as it is also at the heart of many security difficulties.

In most next-generation Internet conceptualizations, the end-to-end principle is modified to some extent in the name of enhancing security. Clark, for example, argues that any future Internet will have to divide responsibility for security among three elements: the network, the end node system, and the application.12 As an illustration, he argues that the network ought to be able to quarantine an end node that is behaving antisocially (e.g., if it is infected by a virus that causes known antisocial behavior, or if it is acting as a zombie in a botnet).

A second view of modifying the end-to-end principle is offered by Casado et al. and their Secure Architecture for the Networked Enterprise (SANE).13 SANE is an architecture for Transmission Control Protocol/Internet Protocol (TCP/IP) enterprise networks that relies on a logically centralized Domain Controller (DC) with a complete view of the network topology to construct routes between any two points on the network. Hosts can only route to the DC, and users must first authenticate themselves with the DC before they can request a capability to access services and end hosts. Once the DC provides a route between two points on the network, that route can only be traversed through a single protection layer that resides between the Ethernet and IP layer. This architecture enables enforcement to be provided at the link layer, to prevent lower layers from undermining it. In addition, it hides information about topology and services from those without permission to see them. And, it requires only one component to be trusted—namely, the DC—in contrast to standard architectures in which multiple components must be trusted (e.g., firewalls, switches, routers, and authentication services).

12. David D. Clark, “Requirements for a Future Internet: Security as a Case Study,” Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, December 2005; available at http://find.isi.edu/presentation_files/Clark_Arch_Security.pdf.

13. Martin Casado et al., “SANE: A Protection Architecture for Enterprise Networks”; available at http://yuba.stanford.edu/~casado/sane.pdf.
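The SANE capability flow just described can be sketched as a toy model. All names and details here are invented for illustration (this is not the actual SANE protocol): users authenticate to the DC, request a capability for a route, and the protection layer consults the DC before forwarding:

```python
import secrets

class DomainController:
    """Toy DC: the single trusted component that authenticates users,
    knows the topology, and issues route capabilities."""

    def __init__(self, users: dict, topology: set):
        self.users = users            # user -> password (illustrative credentials)
        self.topology = topology      # set of (src, dst) routes the DC can construct
        self.capabilities = {}        # token -> (user, src, dst)

    def authenticate(self, user, password):
        return self.users.get(user) == password

    def request_capability(self, user, password, src, dst):
        # Unauthenticated users learn nothing, not even whether a route exists.
        if not self.authenticate(user, password):
            return None
        if (src, dst) not in self.topology:
            return None
        token = secrets.token_hex(8)
        self.capabilities[token] = (user, src, dst)
        return token

    def check(self, token, src, dst):
        # Called by the protection layer before traffic is forwarded.
        cap = self.capabilities.get(token)
        return cap is not None and cap[1:] == (src, dst)
```

The point of the sketch is the trust structure: only the DC's tables matter, so compromising a switch or host does not by itself grant reachability.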

A different approach is offered by Bryant et al., whose Poly2 architecture separates network services onto different systems, uses application-specific (minimal) operating systems, and isolates specific types of network traffic (e.g., administrative, security-specific, and application-specific traffic).14 Using separate networks for carrying traffic of different types (and hence different sensitivities) allows for better separation of concerns, reduces interference, and increases confidence in the authenticity of the information. Trust in the overall architecture arises from the separation of untrusted systems and services, which also helps contain successful attacks against individual systems and services.

From a programmatic standpoint, the National Science Foundation’s CISE-supported Future Internet Network Design (FIND) initiative is an example of an effort to develop a new Internet architecture from the ground up. (CISE refers to the NSF’s Directorate for Computer and Information Science and Engineering.) Broadly speaking, the FIND initiative investigates two issues: (1) the requirements for the global network of 15 years from now and (2) how tomorrow’s global network might be reconceptualized today if it could be designed from scratch. Security is, of course, part of the FIND agenda, a focus motivated by the simple observation that Internet security has grown steadily worse over time. Clark’s arguments on security (above) were presented at a FIND conference in 2005.15

14. Eric Bryant et al., “Poly2 Paradigm: A Secure Network Service Architecture,” Proceedings of the 19th Annual Computer Security Applications Conference, IEEE Computer Society, Washington, D.C., 2003, p. 342.

15. David D. Clark, “Requirements for a Future Internet: Security as a Case Study,” Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, December 2005; available at http://find.isi.edu/presentation_files/Clark_Arch_Security.pdf.

8.6
ATTACK CHARACTERIZATION

A problem very closely related to anomaly detection and forensics is that of attack characterization, sometimes also called attack assessment. Used more or less interchangeably, these terms refer to the process by which systems operators learn that an attack is under way, who is attacking, how the attack is being conducted, and what the purposes of the attack might be.

The first problem is that while the actions of a potentially hostile party may be visible in cyberspace, the intentions and motivations of that party are usually quite invisible. How should a systems operator or owner distinguish between an event that is a deliberate cyberattack intended to compromise an IT system or network and other events, such as accidents, system failures, or hacking by thrill seekers?

A second problem is that a cyberattack may strike multiple targets. How would decision makers know that the same attacker was behind those multiple strikes? As discussed in Section 5.2 (Misuse and Anomaly Detection Systems), this question reflects the issue of large-scale situational awareness. From the defender’s perspective, it might well be useful to know whether attacks on given sites were in fact correlated in time, in space, in origin, or in type. Collecting such data is difficult enough, since the data may be quite voluminous. But analyzing these data to uncover such correlations and presenting the resulting information to decision makers in a comprehensible form present many interesting intellectual challenges.
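A minimal sketch of such cross-site correlation follows. The event format and thresholds are invented for illustration: reported attacks are grouped by origin and type, and a group is flagged when it hits multiple sites within a short window:

```python
from collections import defaultdict

def correlate(events, window=60.0):
    """Group attack reports from many sites by (origin, attack kind) and
    flag groups whose events at two or more distinct sites fall within
    `window` seconds of each other, a crude form of situational awareness.

    Each event is a tuple: (timestamp_seconds, site, origin, kind)."""
    groups = defaultdict(list)
    for e in events:
        groups[(e[2], e[3])].append(e)

    correlated = []
    for key, evs in groups.items():
        evs.sort()                         # order by timestamp
        sites = {e[1] for e in evs}
        if len(sites) >= 2 and evs[-1][0] - evs[0][0] <= window:
            correlated.append((key, sorted(sites)))
    return correlated
```

A production system would stream events and use far richer features (payload signatures, routing data), but the grouping-then-thresholding structure is the same.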

A third problem is that the identity of an attacker may well be uncertain, for an attacker may well seek to deny provenance or attribution information (Section 5.1, Attribution) that might establish his or her identity. But under some circumstances it may be as important to eliminate certain parties as not being responsible for an attack. Consider a large-scale cyberattack that damages key national infrastructure and is also made public. A variety of groups may seek to take credit for such an attack even if they have had nothing to do with carrying out the attack. In these circumstances, policy makers would surely need to be able to distinguish between valid and bogus claims. Ascertaining the identity of an attacker is a forensics problem (Section 7.3, Forensics) writ large, but it also entails pre-incident collection and analysis of possible attack signatures associated with different parties.

8.7
COPING WITH DENIAL-OF-SERVICE ATTACKS

8.7.1
The Nature of Denial-of-Service Attacks

Denial-of-service (DOS) attacks are coordinated attempts to overwhelm a given network resource (e.g., a Web server) with malicious traffic or requests for information to such an extent that legitimate traffic cannot get through. Such attacks are also often distributed in nature, originating from numerous and seemingly unrelated computers (often called zombies, slaves, or bots) around the Internet (Box 2.3, On Botnets). In most cases, the attacking machines are vulnerable computers that have been infected by malicious software or otherwise compromised by the real attacker (or handler), who controls the attacking machines or botnet from afar either by communicating directly with the machines or by an indirect control method such as passing instructions to the machines through an Internet relay chat (IRC) channel.

Distributed denial-of-service (DDOS) attacks can target the network link or the end node.16 A DDOS attack on the network link seeks to make the targeted link severely congested. A DDOS attack on an end node seeks to consume the node’s resources, such as central processing unit (CPU) cycles; for example, the attack may force unnecessary processing (an application-level attack) or may exhaust the node’s memory. End-node DDOS attacks usually fall into one of two types: bandwidth attacks and resource (or protocol) attacks.17 Bandwidth attacks can be direct floods of TCP, ICMP, or UDP packets seeking to overwhelm a machine, or they can be so-called reflector attacks in which the attacking machines use spoofed packets to appear as if they are responding to requests from the targeted machine. Resource attacks can entail consuming all available connections on a machine by taking advantage of the way that network communications protocols work (e.g., by using half-open TCP requests), or they can attempt to crash an intended target outright by using malformed packets or by exploiting weaknesses in software.

All of these DDOS attacks can be quite formidable and difficult to repel. For example, as a recent paper notes, even Internet heavyweights are not immune from them: in “June 2004, the websites of Google, Yahoo! and Microsoft disappeared for hours when their servers were swamped with hundreds of thousands of simultaneous webpage requests that they could not possibly service” in a widespread DDOS attack.18

16

Xuhui Ao, Report on DIMACS Workshop on Large-Scale Internet Attacks, September 23-24, 2003; available at http://dimacs.rutgers.edu/Workshops/Attacks/internet-attack-9-03.pdf.

17

Shibiao Lin and Tzi-cker Chiueh, “A Survey on Solutions to Distributed Denial of Service Attacks,” (TR-201) RPE report, September 2006, p. 8; available at http://www.ecsl.cs.sunysb.edu/tr/TR201.pdf.

18

Shibiao Lin and Tzi-cker Chiueh, “A Survey on Solutions to Distributed Denial of Service Attacks,” (TR-201) RPE report, September 2006, p. 3; available at http://www.ecsl.cs.sunysb.edu/tr/TR201.pdf.

8.7.2
Responding to Distributed Denial-of-Service Attacks

The first step in responding to a DDOS attack is, of course, detecting it—the earlier the better. Administrators use a number of traffic- and network-monitoring tools (e.g., intrusion-detection systems, firewalls, and so on) to stay abreast of the health of their resources. Inevitably, however, one way of detecting a DDOS attack is by getting a call from a user reporting that a given resource or Web site is unavailable. In any case, once an attack is detected, several strategies are available today for addressing it:

  • Respond and block. This approach involves detecting and characterizing the attack and ideally gaining some kind of “signature” from the attack that can be shared with others who might be affected. This signature can then be used to filter the malicious network traffic, often by the Internet Service Provider (ISP) rerouting traffic for the victim through a “scrubber” node.19 In practice, if an attack is large enough, ISPs can “blackhole” offending IP addresses or eliminate their routes. That is, the outside path through which the malicious traffic comes can be shut down, thereby keeping at least the targeted service available to local clients. More importantly, this approach avoids collateral damage to other sites downstream of the chokepoint network link.

  • Hide. In this response, a Web site’s true end points are hidden or are set up with very good filters. Traffic is then routed via an overlay network that hides the final destination and spreads the load. This approach is exemplified by the design and implementation of Secure Overlay Services by Keromytis et al.20

  • Minimize impact. This approach involves simply trying to ride a DDOS attack out, either by adding more bandwidth or by using a content distribution network (e.g., Akamai) to lessen the load on a Web site’s resources (Box 8.3). Also, tools such as CAPTCHAs21 can be used to differentiate and filter legitimate traffic from illegitimate traffic. Many Web sites also choose to degrade their services to all users when under such an attack in order to continue providing what are seen as critical services to legitimate users.

  • Make the attacker work. For attacks aimed at CPU time or memory consumption, a common strategy is to force the attacker to solve some sort of puzzle. A good puzzle is hard to compute but relatively cheap to check. One example is calculating a hash function where some bits of the input are specified by the defender and the output must have some number of high-order bits that are zeroes. Most such schemes are based on a 1992 proposal by Dwork and Naor22; adaptations to network denial-of-service attacks include TCP Client Puzzles23 and TLS Puzzles.24

19

Robert Stone, “An IP Overlay Network for Tracking DoS Floods,” in 9th Usenix Security Symposium, 2000; available at http://www.usenix.org/publications/library/proceedings/sec2000/full_papers/stone/stone.ps.

20

A.D. Keromytis, V. Misra, and D. Rubenstein, “SOS: Secure Overlay Services,” pp. 61-72 in Proceedings of ACM SIGCOMM, August 2002; available at http://citeseer.ist.psu.edu/keromytis02sos.html.

21

CAPTCHAs are an automated means for attempting to determine whether or not a computer or network user is a human being. (CAPTCHA is an acronym for “Completely Automated Public Turing Test to Tell Computers and Humans Apart.”) They often involve distorting a graphic in such a way that a human can still determine what it shows, while a computer or bot would have trouble. For more information, see http://www.captcha.net.

BOX 8.3

Attack Diffusion

As noted in Section 2.1 (Interconnected Information Technology Everywhere, All the Time) in this report, increased interconnection creates interdependencies and vulnerabilities. Nevertheless, it may also be possible to leverage such interconnections to defensive advantage.

To illustrate the point, consider a denial-of-service (DOS) attack, which fundamentally depends on volume to saturate a victim.1 Interconnection could, in principle, enable the automatic diffusion of incoming traffic across multiple “absorption servers.” (An absorption server is intended primarily to absorb traffic rather than to provide full-scale services.) While no single would-be victim could reasonably afford to acquire a large enough infrastructure to absorb a large DOS attack, a service company could provide a diffusion infrastructure and make it available to customers. When a customer experienced a DOS attack, it could use its connectivity to shunt the traffic to this diffusion infrastructure.

At least one company provides such a service today. But these approaches are not without potential problems. For example, the Domain Name System may be used to diffuse requests to one of a number of servers. But doing so reveals the destination addresses of individual absorption servers, which in principle might still leave them vulnerable to attack. Methods to hide the individual absorption servers are known, but they have potentially undesirable effects on service under non-attack conditions. Further, automatic attack diffusion can conflict with occasional user or Internet service provider desires for explicit control over routing paths.

1David D. Clark, “Requirements for a Future Internet: Security as a Case Study,” December 2005; available at http://find.isi.edu/presentation_files/Clark_Arch_Security.pdf.

22

Cynthia Dwork and Moni Naor, “Pricing via Processing or Combatting Junk Mail,” Proceedings of the 12th Annual International Cryptology Conference on Advances in Cryptology, 740: 139-147, Lecture Notes in Computer Science, Springer-Verlag, London, 1992.

23

A. Juels and J. Brainard, “Client Puzzles: A Cryptographic Countermeasure Against Connection Depletion Attacks,” pp. 151-165 in Proceedings of the 1999 Network and Distributed Security Symposium, S. Kent (ed.), Internet Society, Reston, Va., 1999.

24

Drew Dean and Adam Stubblefield, “Using Client Puzzles to Protect TLS,” Proceedings of the 10th Conference on USENIX Security Symposium, 10: 1, 2001, USENIX Association, Berkeley, Calif.; available at http://www.csl.sri.com/users/ddean/papers/usenix01b.pdf.
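The hash-puzzle scheme described above can be made concrete with a short sketch. The following is illustrative only: the difficulty setting, the use of SHA-256, and the function names are assumptions of this sketch rather than details drawn from the cited papers.

```python
import hashlib
import itertools
import os

DIFFICULTY_BITS = 16  # leading zero bits required; tunable by the defender


def make_challenge() -> bytes:
    """Server side: issue a random challenge the client must incorporate."""
    return os.urandom(16)


def _leading_zero_bits(challenge: bytes, nonce: int) -> int:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length()


def solve(challenge: bytes) -> int:
    """Client side: search for a nonce such that SHA-256(challenge || nonce)
    begins with DIFFICULTY_BITS zero bits. Hard to compute, cheap to check."""
    for nonce in itertools.count():
        if _leading_zero_bits(challenge, nonce) >= DIFFICULTY_BITS:
            return nonce


def verify(challenge: bytes, nonce: int) -> bool:
    """Server side: a single hash suffices to check the client's work."""
    return _leading_zero_bits(challenge, nonce) >= DIFFICULTY_BITS


challenge = make_challenge()
nonce = solve(challenge)         # costs the client about 2**16 hashes on average
assert verify(challenge, nonce)  # costs the server exactly one hash
```

The asymmetry is the point: the client performs on the order of 2^16 hash computations on average, while the server verifies the solution with a single hash.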


However, as with most areas of cybersecurity, attackers and defenders are locked in an ongoing arms race, each trying to stay abreast (or ahead) of the other’s techniques and tactics, and developments occur at a rapid pace. Still, there are no ideal, comprehensive solutions for dealing with DDOS attacks, owing in large part to the sheer number and availability of attacking machines. Indeed, attackers are moving toward using ever-larger numbers of machines in their attacks (i.e., larger botnets), more evenly distributed around the Internet, and are attempting to make their attacks as indistinguishable as possible from legitimate traffic so as to confound the filters and response mechanisms used by defenders.

There are three common motives for denial-of-service attacks: vandalism, revenge, and extortion. The different types of attacks suggest the need for different response strategies.

  • Pure vandalism in some sense is the hardest to deal with, since it is typically an impulse crime committed without forethought and against more or less any site on the network. Fortunately, the effects are rarely long-lasting. More ominously, this type of attack may have fallen in importance not because of any substantive defensive measures but because of the shift by perpetrators to profit-motivated cybercrime.

  • The second cause—revenge—is generally more annoying than serious. Typically, one hacker will annoy another; the offended party replies by launching a denial-of-service attack against the offender. These attacks—known as packeting—tend to be of limited duration; however, other users sharing the same access link are not infrequently affected as well.

  • Profit-motivated DDOS attacks, and in particular extortion attacks, are in some sense easier to deal with. The targets are more predictable and hence can take defensive measures. Nonetheless, there is often insufficient time for a response. One common victim has been sports gambling Web sites, since they sell a time-sensitive product. (While online gambling is illegal in the United States, it is legal in other parts of the world, and U.S. companies often suffer collateral damage when flooding attacks against the gambling sites overload chokepoint network links.) Conventional law enforcement—“follow the money”—may be the most promising avenue, although the perpetrators generally employ money-laundering in an attempt to evade prosecution.

8.7.3
Research Challenges

Research challenges in dealing with denial-of-service attacks focus on how to identify and characterize DDOS attacks and how to mitigate their effects. In the first area, which includes the reliable detection of large-scale attacks on the Internet and the real-time collection and analysis of large amounts of attack-monitoring information, Moore et al. have developed a technique, known as backscatter, for inferring certain DOS activity.25 The technique is based on the fact that DDOS attackers sometimes forge the IP source addresses of the packets they send so that the packets appear to the target to be arriving from one or more third parties. As a practical matter, these fake source addresses are usually generated at random (that is, each packet sent carries a randomly generated source address). The target, receiving a spoofed packet, tries to send an appropriate response to the faked IP address. Because the attacker’s source addresses are selected at random, the victim’s responses are scattered across the entire Internet address space (this effect is called backscatter). By observing a large enough address range, it is possible to effectively sample all such denial-of-service activity on the Internet. Contained in these samples are the identity of the victim, information about the kind of attack, and a timestamp that is useful for estimating attack duration. The average arrival rate of unsolicited responses directed at the monitored address range also provides a basis for estimating the actual rate of attack traffic directed at the target.
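The extrapolation in the last sentence can be written down directly. In this sketch (the function name and the example numbers are illustrative, not drawn from Moore et al.), a monitor covering n of the 2^32 IPv4 addresses sees each randomly scattered response with probability n/2^32, so the observed rate scales up by the inverse factor:

```python
# Back-of-the-envelope backscatter estimate, as described above.
IPV4_SPACE = 2 ** 32


def estimated_attack_rate(observed_responses: int, window_seconds: float,
                          monitored_addresses: int) -> float:
    """Extrapolate a victim's total response rate (packets/second) from the
    unsolicited responses observed in the monitored address range."""
    observed_rate = observed_responses / window_seconds
    return observed_rate * IPV4_SPACE / monitored_addresses


# Example: a monitor covering a /8 (2**24 addresses) sees 600 unsolicited
# TCP RSTs from one victim during a 60-second window.
rate = estimated_attack_rate(600, 60.0, 2 ** 24)
print(rate)  # 2560.0 packets/second directed at the victim, roughly
```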

There are several limitations to this technique. The most important is the assumption that attack packets appear to come from forged source addresses. While this was certainly true of the first generation of DDOS attacks, many attackers no longer bother with such forgery. While the exact extent of forgery is debatable, some experts claim that the large majority of attacks no longer use forged addresses. Two of the reasons for this shift are good; a third, though, is cause for concern. First, operating system changes in Windows XP Service Pack 2 make address forgery harder. Second, a number of ISPs follow the recommendations in RFC 2827 and block (many) forged packets.26 Third, forgery is often unnecessary: source address-based filtering near the victim is rarely possible, and there are sufficiently many attack packets that effective tracing and response are difficult.
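The RFC 2827 recommendation mentioned above amounts to a membership test at the network edge: a customer's packet is forwarded only if its claimed source address lies within the prefixes delegated to that customer. A minimal sketch (the prefixes shown are documentation addresses, and the function name is an assumption of this sketch):

```python
from ipaddress import ip_address, ip_network

# Prefixes delegated to this customer (illustrative documentation ranges).
CUSTOMER_PREFIXES = [ip_network("198.51.100.0/24")]


def permit_ingress(source_ip: str) -> bool:
    """Drop any packet whose claimed source lies outside the customer's
    delegated prefixes, per RFC 2827 (BCP 38) ingress filtering."""
    addr = ip_address(source_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)


assert permit_ingress("198.51.100.42")    # legitimate customer source
assert not permit_ingress("203.0.113.7")  # spoofed source, dropped at the edge
```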

The second area—mitigating the effects of DDOS attacks—spans a number of topics. One important topic is the development of better filters and router configurations. For example, the optimal placement of filters to maximize benefit and minimize negative impact is not easy to determine. Another example is the development of network-layer capabilities that can be used to filter traffic efficiently. One such capability is the implementation of “pushback” configurations, an approach to handling DDOS attacks that adds functionality to routers so that they can detect and preferentially drop packets that probably belong to a DDOS attack, while also notifying upstream and downstream routers to do likewise.27 Such an approach requires coordination among routers beyond that available through standard routing protocols.

25

David Moore et al., “Inferring Internet Denial-of-Service Activity,” ACM Transactions on Computer Systems (TOCS), May 2006; available at http://www.caida.org/publications/papers/2001/BackScatter/usenixsecurity01.pdf.

26

P. Ferguson and D. Senie, RFC 2827, Network Ingress Filtering: Defeating Denial of Service Attacks Which Employ IP Source Address Spoofing, May 2000. Also known as BCP 38.
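The aggregate-detection step of pushback can be sketched in miniature: a congested router tallies recently dropped traffic by destination prefix, and any aggregate responsible for a disproportionate share is flagged for preferential dropping and reported upstream. The threshold, the /24 granularity, and the function name below are assumptions of this sketch; the actual mechanism is described in Mahajan et al. (footnote 27).

```python
from collections import Counter

AGGREGATE_SHARE_LIMIT = 0.5  # flag any /24 carrying >50% of dropped traffic


def find_attack_aggregates(dropped_packets: list) -> list:
    """Given destination addresses of recently dropped packets, return the
    /24 prefixes responsible for a disproportionate share of the drops."""
    prefixes = Counter(addr.rsplit(".", 1)[0] + ".0/24"
                       for addr in dropped_packets)
    total = len(dropped_packets)
    return [p for p, n in prefixes.items() if n / total > AGGREGATE_SHARE_LIMIT]


# One victim prefix dominates the drop record at a congested link.
drops = ["203.0.113.5"] * 80 + ["198.51.100.9"] * 20
print(find_attack_aggregates(drops))  # ['203.0.113.0/24']
```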

Another important topic relates to scale. Today’s solutions do not scale to the numbers of attacking machines seen in today’s botnets. Therefore, one major research area is the development of scalable solutions for addressing DDOS attacks or for weathering them (e.g., content distribution networks). Other challenges involve developing ways to ensure that computers and their users are less susceptible to compromise by attackers or malicious code, thereby diminishing the resources available for attackers’ use in botnets. Additional DDOS-related research could also be useful in areas such as network protocols, network infrastructure, network flow analysis and control, metrics for measuring the impacts of DDOS attacks, and better forensic methods and techniques for tracing and catching attackers.28

Still another topic is organizational and institutional. Because certain promising approaches to dealing with DDOS attacks depend on cooperation among ISPs (some of which may be in different countries and subject to different laws), finding ways to encourage and facilitate such cooperation is important.29 Research on this topic might include how responsibility and obligation for responding to attacks should be shared between ISPs and their customers; what kinds of business service models are needed; how to build formal collaborations for automated coordination among different sites, ISPs, and various agencies; and how to incentivize ISPs to deploy defensive measures.

27

For more information on pushback, see Ratul Mahajan, Steven M. Bellovin, Sally Floyd, John Ioannidis, Vern Paxson, and Scott Shenker, “Controlling High Bandwidth Aggregates in the Network,” Computer Communications Review 32(3): 62-73, 2002.

28

For additional information on DDOS attacks, see Jelena Mirkovic et al., A Taxonomy of DDoS Attacks and DDoS Defense Mechanisms, Technical Report #020018, University of California, Los Angeles, Computer Science Department, available at http://www.eecis.udel.edu/~sunshine/publications/ucla_tech_report_020018.pdf [undated]; Xuhui Ao, Report on DIMACS Workshop on Large-Scale Internet Attacks, Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), available at http://dimacs.rutgers.edu/Workshops/Attacks/internet-attack-9-03.pdf, 2003; and Rich Pethia, Allan Paller, and Eugene Spafford, “Consensus Roadmap for Defeating Distributed Denial of Service Attacks,” Project of the Partnership for Critical Infrastructure Security, SANS Institute, available at http://www.sans.org/dosstep/roadmap.php, 2000.

29

Xuhui Ao, Report on DIMACS Workshop on Large-Scale Internet Attacks, September 23-24, 2003; available at http://dimacs.rutgers.edu/Workshops/Attacks/internet-attack-9-03.pdf.


As one example, the entire community of ISPs would benefit from knowing the frequency of DOS attacks. ISPs are aware (or could be aware) of DOS attacks through the measurements that they ordinarily make in the course of their everyday operations, since sustained rates of packet drops by routers, observable via the simple network management protocol (SNMP), frequently indicate the existence of an attack. However, for competitive reasons, this information is rarely disclosed publicly, so the community cannot develop a complete picture of the situation. Research (or at least investigation) is needed to determine mechanisms that would encourage the disclosure of such data to an independent third party and the publication of a sanitized version of these data.
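The detection signal described above can be sketched simply: sampled values of a router interface's packet-drop counter (as SNMP would report them) are converted to per-interval drop rates, and a sustained elevation raises an alarm. The threshold, window length, and function name are illustrative choices of this sketch, not operational guidance.

```python
DROPS_PER_SECOND_THRESHOLD = 1000  # assumed alarm threshold
SUSTAINED_INTERVALS = 3            # rate must stay elevated this long


def sustained_drop_alarm(counter_samples: list, interval_seconds: float) -> bool:
    """counter_samples holds a monotonically increasing drop counter read at
    fixed intervals; alarm when the drop rate stays above threshold for
    SUSTAINED_INTERVALS consecutive intervals."""
    rates = [(b - a) / interval_seconds
             for a, b in zip(counter_samples, counter_samples[1:])]
    streak = 0
    for rate in rates:
        streak = streak + 1 if rate > DROPS_PER_SECOND_THRESHOLD else 0
        if streak >= SUSTAINED_INTERVALS:
            return True
    return False


# Counter read every 60 s: quiet, then a surge of ~120,000 drops per minute.
samples = [0, 600, 1200, 121_200, 241_200, 361_200]
print(sustained_drop_alarm(samples, 60.0))  # True
```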

8.8
DEALING WITH SPAM

Spam—what might loosely be defined as unsolicited e-mail sent en masse to millions of users—has evolved from a minor nuisance into a major problem for the Internet, both as a mechanism for delivering attacks (e.g., phishing) and as a means for propagating other types of attack (e.g., viruses). Spam is undesirable from the recipient’s standpoint because he or she must continually spend time and effort dealing with unwanted e-mails. In small volume, it would be easy to delete unwanted e-mails that can be identified from the header. But spam e-mail often uses deceptive headers in order to persuade users to open it (e.g., rather than saying “Subject: Viagra for sale,” the header will say “Subject: Greetings from an old friend”), and by some accounts, spam constitutes over 90 percent of e-mail sent on the Internet.30 Thus, it is not unreasonable to estimate that individuals spend hundreds of millions of person-hours per year dealing with spam. Today, spam threatens to undermine the stability and usefulness of networked systems and to impose significant economic costs and lost productivity.

Spending valuable time dealing with a nuisance is bad enough, but spam can also have serious consequences. For example, spam can clutter one’s mailbox so that desired e-mails are missed or other e-mails cannot be received; it forces ISPs or users to implement filters that may inadvertently filter wanted messages. Because spam can prevent a user from doing useful things in his or her computing environment, spam can be regarded as a kind of denial-of-service attack against individual users.

Spam can cause harm. One risk is a form of online identity theft. Because it is easy to forge an electronic return address (so that an e-mail appears to have been sent from the forged address), spam senders often insert legitimate e-mail addresses (e.g., those harvested from online bulletin boards, chat rooms, and the like) as the purported sender of their spam e-mail. The reputation of the legitimate e-mail user is thus compromised, and the spam also generates for the legitimate user a flood of “mailer-rejection notices” from e-mail systems that reject the spam e-mail for some reason.

A second risk is that spam can compromise the integrity of the user’s computing environment, causing it to do things that are undesired from the user’s point of view. E-mail systems are often designed to allow users to open and execute e-mail attachments with a simple mouse click, or to access Web pages referenced in an embedded link, or to display images or messages formatted as “rich text.” Such functionality increases the convenience and enhances the utility of e-mail for the user. But when spammers exploit these features, the result can be that a hostile attachment is executed, a user-compromising Web page is accessed (merely by accessing it), or a trap door is opened simply by viewing the e-mail.

It is true that clandestine applications can be delivered through many different mechanisms, and in principle there is nothing special about spam e-mail as a delivery mechanism. But in practice, the ease with which e-mail can be delivered suggests that e-mail—and payloads that it carries—will be used aggressively in the future for commercial purposes.31

Once compromised, the user’s computing environment becomes a platform for active threats such as the following:

  • Divulging the personal information resident on the user’s computer. Especially common would be financial records that are stored by various personal money management systems, but in the future such information may include medical records. Such information could be used to target users with specific and personalized communications that may be threatening. An example of a targeted personal e-mail would be: “Did you know the odds of dying with your disease are much higher now?”

  • Displaying advertisements by surprise (e.g., pop-under ads).

  • Tracking the user’s information-seeking behavior (e.g., what Web sites have been visited). Today, the use of such traces is most often limited to identifying when a user is visiting a site that was visited in the past, but there is nothing in principle that prevents the entire trace from being made public knowledge. (For example, consider spyware from a group that opposes pornography that reports your use of sexually explicit Web sites to a public database.)

31

It is also true that the root cause of the problems caused by Trojan horses is insecurities in the user’s computing environment. Thus, one could argue, with considerable force and reason, that eliminating these insecurities would eliminate Trojan horse problems as well as a host of other problems. On the other hand, it is unrealistic to expect that such insecurities would ever be eliminated entirely. More to the point, users will not be relieved to know that the reason they are suffering from Trojan horses is that their operating systems are insecure.

  • Launching attacks on other computer systems without the user’s knowledge (e.g., as part of a botnet).

From an institutional standpoint, spam consumes significant amounts of bandwidth, for which ISPs and network operators must pay. Indeed, large volumes of spam are in some ways indistinguishable from a denial-of-service attack. Thus, spam can have important security implications on a regional or national scale as well as being simply annoying to individual users. ISPs and users may also bear the cost and inconvenience of installing and maintaining filters to reduce spam volumes, as well as of maintaining a larger infrastructure to accommodate the vast amount of spam flowing through their networks (more servers, routers, administrators, floor space, power, and so on). (An interesting question is thus how the collective cost to individuals and businesses compares with the benefits gained collectively by the spam senders and those who actually buy something as a result of the spam.)

Spam is only one dimension of a commercial environment that bombards citizens with junk mail (e.g., catalogs and endless advertising pieces), long unsolicited voicemails on telephone messaging systems, and unwanted faxes. But spam differs from the others in at least two significant ways. First, the cost per message of transmitting spam e-mail and similar electronic messages is several orders of magnitude smaller than that of postal mail or telephone calls. Second, spam can be more deceptive than junk snail mail (junk faxes and telemarketing phone calls are annoying but constitute small fractions of total fax and phone traffic). Before it is opened, spam e-mail can have the identical look and feel of a legitimate e-mail from an unknown party.

Policy makers at both the federal and state levels are seeking legislative remedies for spam, such as the CAN-SPAM Act of 2003 (15 U.S.C. 7701 et seq.). However, crafting appropriate and workable legislation has been problematic, with at least four separate dimensions creating difficulty:

  • As a commercially oriented activity, some forms of spam do create some economic benefit. Some small fraction of spam recipients do respond positively to unsolicited e-mail that promotes various products or services. In this regard, it is important to remember that unsolicited commercial e-mail does not consist solely of Nigerian bank fraud messages or ads for Viagra, but also includes ads for cars, software, sunglasses, and vacations. Furthermore, the economics of e-mail are such that if only a very small fraction of the recipients of a given spam mailing respond positively, that is sufficient to make the sending of the original spam turn a profit.

  • Defining spam through a legislative process is very difficult. What is spam for one person may be an interesting curiosity to another. Consequently, it is very difficult to develop regulations that capture the notion of spam in a sufficiently precise manner to be legally enforceable and yet sufficiently general that spam senders cannot circumvent them with technical variations.

  • Spam can be sent with impunity across national borders. Regulations applying to domestic spam senders can easily be circumvented by foreign intermediaries.

  • Spam is arguably a form of free speech (albeit commercial speech). Thus, policy makers seeking to regulate spam must tread carefully with respect to the First Amendment.

In the long run, addressing the spam problem will involve both technology and policy elements. One important technical dimension is the anonymity of spam. Because they realize the unpopularity of the e-mail that they produce, today’s spam senders seek a high degree of sender anonymity to make it difficult or impossible for the recipient to obtain redress (e.g., to identify a party who will receive and act on a complaint). Thus, the provenance of a given e-mail is one element in dealing with the spam problem, suggesting the relevance of the attribution research discussed in Section 5.1, “Attribution.”

But even if the attribution problem itself is solved, there are complicating factors regarding spam. For example, as far as many people are concerned, the senders of e-mail fall into three categories—those known to the receiver to be desirable, those known to be undesirable, and those of an unknown status. Provenance—at least as traditionally associated with identity—does not help much in sorting out the last category. Moreover, botnets today send “legitimate” e-mail from compromised hosts—that is, if my computer is compromised so that it becomes a zombie in a botnet army, it can easily send spam e-mail under any e-mail account associated with my computer. That mail will be indistinguishable from legitimate e-mail from me (i.e., e-mail that I intended to send). Thus, preventing the compromise of a host becomes part of the complete spam-prevention research agenda.

Yet another technical dimension of spam control is a methodology for examining the content as well as the origin of e-mails.32 That is, how can a computer be trained to differentiate spam from legitimate e-mail? Most spam-recognition systems today have at least one machine learning component that performs such differentiation based on examples of both spam and nonspam e-mail. Much of the progress in antispam research has involved improving the relevant machine learning algorithms as spammers develop more sophisticated means for evading spam-detection algorithms. Other relevant factors entail obtaining more examples of different kinds of spam (so that new kinds of detection-evasion techniques can be taken into account by spam detectors) and doing so more quickly (so that spammers have smaller windows in which to propagate their new variants).

32

Joshua Goodman, Gordon V. Cormack, and David Heckerman, “Spam and the Ongoing Battle for the Inbox,” Communications of the ACM, 50(2): 24-33, 2007.
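The machine learning component described above is commonly realized as a naive Bayes text classifier trained on labeled examples. The sketch below is a minimal illustration; the tiny training corpus is invented, and production filters use far larger corpora and richer features.

```python
import math
from collections import Counter


def train(messages):
    """messages: (label, text) pairs with label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter(label for label, _ in messages)
    for label, text in messages:
        counts[label].update(text.lower().split())
    return counts, totals


def classify(counts, totals, text):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior plus log likelihood with add-one (Laplace) smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)


corpus = [
    ("spam", "cheap viagra limited offer"),
    ("spam", "winner claim your prize offer now"),
    ("ham", "meeting notes attached for review"),
    ("ham", "lunch tomorrow to review the draft"),
]
model = train(corpus)
print(classify(*model, "claim your cheap prize"))       # spam
print(classify(*model, "draft notes for the meeting"))  # ham
```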

Another dimension of spam-detection performance depends on the ability to extract the relevant content from the bits that actually constitute the e-mail. ASCII art, photographic images, and HTML encodings have all been used to evade filtering, with varying degrees of success. Indeed, image-based spam, in which an e-mail contains an embedded image of a message, is quite common today. All of these methods exploit the fact that extraction of the content is computationally intensive and thus impractical to perform on all incoming e-mails.

Spam is, by definition, a collection of many e-mails with identical content. So spam might be identified by virtue of the fact that many copies of it are circulating on the Internet, and there are ways that institutionally based spam filters could identify a given e-mail as being part of this category. The obvious countermeasure for the spammer is to make each message slightly different, but in a way that does not alter the core message of the spam e-mail. That countermeasure in turn suggests another research problem: identifying messages as “identical in semantic content” despite small differences at the binary level.
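One common family of techniques for recognizing near-identical messages despite small binary-level differences compares overlapping word windows (“shingles”) rather than raw bytes. The sketch below illustrates the idea with word shingles and Jaccard similarity; the messages and the choice of three-word shingles are illustrative assumptions, not drawn from the report.

```python
import re

def shingles(text, k=3):
    """Return the set of k-word shingles (overlapping word windows) in a message."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets, from 0.0 to 1.0."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two spam variants differing only in punctuation, casing, and one word...
msg1 = "Congratulations! You have won a free prize. Click here to claim your free prize today."
msg2 = "Congratulations!! You have won a FREE prize. Click now to claim your free prize today."
# ...and an unrelated legitimate message.
msg3 = "The quarterly report is attached for your review before the meeting."

print(jaccard(shingles(msg1), shingles(msg2)))  # high: same core message
print(jaccard(shingles(msg1), shingles(msg3)))  # near zero: unrelated content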

The economics of spam are also relevant. If the incremental cost of sending spam were higher, the volume of spam could be reduced significantly. But spammers are not the only parties to send e-mail in bulk—organizations with newsletters, for example, may send large volumes of e-mail as well. The imposition of a small financial cost per e-mail would do much to reduce spam, but it would be difficult to deploy and also would violate long-standing practices that make e-mail an effective mechanism of communication notwithstanding the spam problem. Other ways of imposing cost include requiring a time-consuming computation that makes it more difficult to send e-mails in bulk and requiring a proof that a human is involved in the sending of individual e-mails. How to impose costs on spammers, and only on spammers, remains an open technical and regulatory question.
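The “time-consuming computation” approach mentioned above is the idea behind proof-of-work proposals such as Hashcash: the sender must find a nonce whose hash meets a difficulty target, which is cheap per message for an ordinary correspondent but costly in bulk. The sketch below is a simplified illustration of the concept, not the actual Hashcash stamp format; the function names, stamp layout, and difficulty parameter are assumptions for the example.

```python
import hashlib
from itertools import count

def mint_stamp(recipient, bits=16):
    """Sender's (expensive) step: find a nonce whose SHA-256 hash over the
    stamp falls below a difficulty target of `bits` leading zero bits."""
    target = 1 << (256 - bits)
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        if digest < target:
            return stamp

def verify_stamp(stamp, recipient, bits=16):
    """Receiver's (cheap) step: one hash confirms the stamp names this
    recipient and meets the difficulty target."""
    if not stamp.startswith(recipient + ":"):
        return False
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return digest < (1 << (256 - bits))

stamp = mint_stamp("alice@example.org", bits=16)
print(verify_stamp(stamp, "alice@example.org", bits=16))  # accepted
print(verify_stamp(stamp, "bob@example.org", bits=16))    # rejected: wrong recipient
```

Because the stamp is bound to a particular recipient, it cannot be reused across millions of addresses; the asymmetry (expensive to mint, cheap to verify) is what imposes cost on bulk senders without a per-message financial charge. As the text notes, deploying such schemes broadly without penalizing legitimate bulk senders remains an open question.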

Finally, as new communications channels emerge, new forms of spam are likely to emerge. For example, spam text messages sent to mobile phones and instant-message spam are two relatively new forms of spam. Future spam variants may include exploits related to location-aware devices (e.g., advertisements tied explicitly to the user’s location) and spam and spam-like payloads other than text delivered to mobile devices such as cellular telephones. As an example of the latter, with the increasingly popular use of voice-over-IP, junk phone calls (also known as SPIT, for spam over Internet telephony) may become a problem in the future. Research will be needed to address these new forms of spam as well.
