The Significance of Crosscutting Challenges and Technologies
This report discusses in detail the impact of potential terrorist attacks on our major systems—information, energy, transportation, and power, among others, as well as on our cities—with conventional, biological, chemical, nuclear, and information-warfare weapons.
An even more daunting set of challenges comes from the fact that the country’s major systems—energy and information, for example—are integrated and interoperable to a significant degree. Terrorists may seek to achieve their objectives by taking advantage of such couplings. They could attack a system at a point selected to produce serious reverberations throughout many of the other systems, thereby maximizing the damage from a single action. Or they could simultaneously attack critical nodes within several linked infrastructures to produce enormous overall damage to the nation—to its systems and its citizens.
A significant array of technology is already available—much of it developed by the DOD and DOE—that can be adapted to improve homeland security. But dedicated research and development carried out over the next decade must greatly expand this array in order to make the nation’s infrastructures and people more secure. These technological challenges, many of which are discussed in Chapters 2 to 10, can be met through an expanded, focused, and sustained set of research and development programs. Some of these programs recur in many of the chapters because efforts that contribute to the response to terrorism have some common technical elements. For this reason, these common elements deserve specific attention in this chapter, which addresses seven such crosscutting issues:
The need for the application and continued development of systems analysis and modeling capabilities to aid in threat assessment, in identification of infrastructure vulnerabilities and interdependencies, and in planning and decision making (particularly for threat detection, identification, and response coordination);
The development of standards and techniques to allow for the integrated management of data regardless of their source;
The utilization and development of sensors and sensor networks for the detection of conventional, biological, chemical, nuclear, and information-warfare weapons. To be effective and acceptable for operational use, these systems must have high sensitivity in detecting various threat agents yet must also function with low false-positive and false-negative rates;
The need for the use and continued development of robotic platforms to support mobile sensor networks for threat detection and intelligence collection. Robotic technologies can also assist humans in such activities as ordnance disposal, decontamination, debris removal, and fire-fighting;
The need to harden and protect the supervisory control and data acquisition (SCADA) systems that are widely used for operational control and monitoring of most components of the nation’s basic infrastructures;
The need to control access to our physical and information systems, thereby increasing security, while minimizing the impact of security measures on system performance. The committee focuses on biometrics as a promising set of technologies for this purpose.
The need to take human factors and organizational-behavior principles into account in the design and deployment of counterterrorism systems, because all systems within the United States are operated or controlled at some level by humans and depend on human command and control.
The committee refers to these issues as “crosscutting” because they recur as ways to lessen many different vulnerabilities, but they could also be called “systems” issues because they are strongly interrelated. For example, improved techniques for data management will be a critical enabler for systems modeling, sensor networks, robotics, and biometrics. Systems analysis will lead to a better understanding of how to improve SCADA systems. And of course understanding human factors will be an essential step in successfully implementing any new counterterrorism technology.
The federal government will need to determine priorities, perform research, and support the implementation of technologies in all of these crosscutting areas, as well as other such areas that may emerge in the future. But because of the interdisciplinary nature of these topics, it is often not clear where the information to support decisions in these areas will come from. In Chapter 12, the committee discusses the need for a Homeland Security Institute to provide the needed technical analysis and support.
The remaining sections of this chapter discuss in greater detail the seven crosscutting issues listed above. The chapter concludes with a discussion of the kinds of research and development efforts that are needed, together with their associated structural and funding considerations—particularly within the U.S. government—to make an effective and aggressive science-and-technology agenda for counterterrorism a reality.
SYSTEMS ANALYSIS AND MODELING
Our nation’s infrastructures are individually complex and tightly linked, so a terrorist attack has the potential to produce manifold effects in multiple seemingly independent systems. This means that in modeling the nation’s infrastructures and assessing any threats against them we must take a methodical and coordinated approach, not only to exploring each system’s vulnerabilities but also to analyzing the overall picture.
Modeling and simulation are especially useful for these purposes, and they could make important contributions to counterterrorism research at both the macro and micro levels. At the largest scale, simulations might be able to reveal the vulnerability of whole infrastructures—and of networks of infrastructures. For example, the air transportation system depends heavily on fuel supplies (for airlines and for the ground transportation that moves people and resources to and from airports), power (electricity for the airport concourses, ground maintenance, general lighting, and air traffic control), and communications networks; what happens when one or more of these elements is disengaged from the system? Many such examples exist: What exactly will the effects be on the transportation system if a major petroleum refinery is put out of commission? How severely will firefighting capabilities be limited if part of a city’s water system is shut down?
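As a rough illustration of how such interdependency questions can be posed computationally, the sketch below propagates failures through a small dependency graph. The node names, dependency structure, and all-or-nothing failure rule are illustrative assumptions, not data about any real infrastructure.

```python
# Illustrative sketch: cascading failure across interdependent infrastructures.
# dependencies[x] = the set of nodes that x requires to stay operational.
# Every name and edge here is hypothetical.
dependencies = {
    "airport_ops":   {"electric_grid", "fuel_supply", "comms_network"},
    "air_traffic":   {"electric_grid", "comms_network"},
    "fuel_supply":   {"electric_grid"},
    "comms_network": {"electric_grid"},
    "electric_grid": set(),
}

def cascade(initial_failures):
    """Propagate failures: in this simple model, a node fails as soon as
    any one of its dependencies has failed."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in dependencies.items():
            if node not in failed and deps & failed:
                failed.add(node)
                changed = True
    return failed

print(sorted(cascade({"electric_grid"})))
# → ['air_traffic', 'airport_ops', 'comms_network', 'electric_grid', 'fuel_supply']
```

Even this toy model shows why attack-point selection matters: taking out the grid node disables every other system, while losing the fuel node alone affects only airport operations. Real interdependency models would replace the all-or-nothing rule with capacities, redundancy, and time dynamics.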
Even on smaller scales, modeling and simulation are important tools that can provide useful perspectives on how chemical plumes, radioactive fallout, or spore clouds might disperse through the air and how hazardous material spills might spread over land or in water.1 A particularly important area will be modeling relevant to bioterrorism, as there are a large number of potential biological agents, and a great deal of terror could be generated by a biological attack. Modeling can help examine how diseases would spread for a range of different incubation periods and transmission dynamics, as well as take into account key variables like climate, population, and migration. Understanding realistic as well as worst-case circumstances is essential. For this work, the expertise in building these kinds of models and the knowledge of key input parameters are limited for human, animal, and plant pathogens, so increasing the pool of experts and performing research to determine how potential biological agents behave will be vital for planning efforts.
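A minimal example of the kind of disease-spread model described above is the classic SEIR compartment model, in which an "exposed" compartment represents the incubation period. The parameter values in this sketch are hypothetical; planning-grade models would need pathogen-specific data of exactly the kind noted above.

```python
# Minimal SEIR sketch in discrete daily steps. beta = transmission rate,
# sigma = 1/(incubation period), gamma = 1/(infectious period). All
# values are illustrative, not calibrated to any real pathogen.

def seir(s, e, i, r, beta=0.3, sigma=0.2, gamma=0.1, days=200):
    n = s + e + i + r
    history = []
    for _ in range(days):
        new_exposed = beta * s * i / n   # S -> E: new infections
        new_infectious = sigma * e       # E -> I: end of incubation
        new_recovered = gamma * i        # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((s, e, i, r))
    return history

hist = seir(s=999_990, e=0, i=10, r=0)
peak_infectious = max(i for _, _, i, _ in hist)
```

Varying sigma explores different incubation periods; varying beta with season or location would capture the climate and population effects mentioned above.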
Finally, modeling and simulation can also be invaluable in disaster planning and training, allowing for principal players and staff to rehearse emergency procedures and gain experience in decision making under crisis conditions.
Many models and simulations already exist, but they would have to be modified to account for the different dynamics of systems, people, and social organizations under the difficult and unusual conditions of terrorism. These special needs stem in part from the diversity of potential agents and the numerous kinds of terror they generate and in part from the behavior of terrorists, which cannot usually be modeled as a probability distribution (as in conventional models), although the consequences of a terrorist attack do have stochastic elements. Still, many of the new systems-analysis tools could be dual use: The study of terrorist attacks might also be of value in better understanding medical, fire, weather, and other emergency situations. Conversely, critical assessment of previous acts of sabotage or other illegal forms of tampering, entering, or destruction of components of our infrastructure could be useful in developing case studies for training exercises and providing real-world data to validate new models.

As a general rule, the essential elements of large-scale analyses, modeling, and simulation are well understood. However, useful output is very much dependent on a keen grasp of the physical system being simulated, knowledge of its most appropriate models, and access to reliable data—or at least reliable distributions of key variables. These needs are even more intense in analyzing the interconnectedness of systems. Because the simulations would be multidisciplinary, they would require considerable expertise across several domains, likely to be manifested in a sizable team of experts.
Such multidisciplinary efforts, at least in the past, have been easier said than done. While the current state of the art for the analysis and modeling of critical infrastructure is reasonably good, it is focused only on single aspects. For example, there are models of the electric power grid, models of various telecommunications networks, hydrologic models of river basins and dams, and so on. A number of modeling efforts have been funded by DOD’s Defense Threat Reduction Agency, and are currently under development, to analyze potential threats to critical infrastructures within the United States, particularly those used by DOD to support operations. However, models describing interactions among various dimensions of critical infrastructure are almost totally lacking. Research efforts are currently under way to develop such models, but these efforts are small and in their initial stages. Clearly, in the overall development of scientific and technological capabilities for countering terrorism—which will probably target multiple aspects of critical infrastructures—modeling the interactions among systems should receive higher priority.
Most U.S. government departments and agencies are not organized to assess terrorist threats, infrastructure vulnerabilities, and mitigation strategies from a systems perspective—at least not at present. But that could change. Various threat and infrastructure models must be developed, and used in combination with intelligence data, to perform analyses by which high-risk paths and associated attack access-points could be determined. Such results would permit formulation of effective threat mitigation strategies. And they could contain the seeds of their own improvement and contribute to threat prevention: Analysis-derived knowledge of the attack paths deemed to pose the greatest risk would in turn enable determination of what types of terrorist activity intelligence data should be sought.
Strengthening the government’s ability to execute the modeling and analyses described in this section depends not only on the application of existing capabilities to counterterrorism problems, but also on the development of new capabilities. A systems modeling and analysis research agenda would include a focus on system perspectives for homeland security, modeling and analysis of interdependencies among critical infrastructures, agent-based and system dynamics modeling, development of simulators and learning environments, and risk assessment and management from a multiobjective perspective, including risks up to and including potentially extreme and catastrophic events. (See Chapter 10 for more on techniques for systems analysis and modeling.)
INTEGRATED DATA MANAGEMENT
Modeling of the many diverse systems and infrastructures in the United States requires capabilities for managing data collected over widely different scales of space and time. The structural characteristics of power plants, pipelines, and reservoirs, for example, obviously do not change rapidly, while many commercial applications (such as energy trading) require real-time updates. Integrating such dissimilar data for the modeling and analysis of counterterrorism programs—themselves having highly time-critical components—is a major challenge.
Many data types must coexist in these applications. Necessary data include structured text (such as tables and system logs), unstructured text (documents), geographic features (maps, for example), time-series data (such as financial histories), video surveillance, and other kinds of data. Furthermore, these data can describe phenomena on very different spatial and temporal scales, from national levels and time periods of decades to very local phenomena with time scales of seconds and minutes. The system models must also be able to work on multiple levels of abstraction, selecting the level of detail in the data necessary for their particular applications.
Because commercial database-management systems currently do not address all of the above data types with reasonably high quality of performance, a
new generation of database-management-system technology will be required. The following issues are critical to establishing the relevant databases drawn from the multiple sources needed for counterterrorism system modeling and decision making:
Quantity and relevance,
Capabilities for data and database integration,
Data models and database-management architectures.
Ideally, the development of models for large-scale systems should work backwards: from an understanding of the nature of the desired results, through the model, back to the required data. Given the increasing level and sophistication of terrorist threats to the United States and the consequent importance of counterterrorism-related model development, it will be possible to initiate selected data-collection efforts for obtaining further information about critical infrastructures and other relevant systems. However, because of the cost and time required for data collection, future modeling efforts must rely (at least in part) on data sources originally designed to serve other purposes. The use of current data resources for counterterrorism therefore requires the development of significant capabilities for data filtering, quality control, and other procedures to avoid inefficiency and information overload.
In a similar spirit, one of the major applications of database-management systems for countering terrorism will be data mining—the analysis of historical and current online data, often from disparate information sources, to discern patterns. Much work remains to be done, however, before attaining that capability. Today’s commercial technology is highly dependent on clean, well-structured data, such as credit card transactions and cell-phone records, which might be scarce or nonexistent for suspected criminals and terrorists; thus the capacity to process other kinds of data will be needed. Moreover, nonstructured data such as text, images, and video are not especially well handled by commercial technology, although promising research in this area is currently under way. (For more discussion of data mining and information fusion, see Chapter 5, Information Technology.)
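One simple form of the cross-source pattern analysis described above is link analysis: flagging identifiers that recur across otherwise separate data sources. The sketch below is a toy illustration; the source names and record identifiers are hypothetical.

```python
from collections import defaultdict

# Hypothetical record sets: identifiers observed in each data source.
sources = {
    "shipping_manifests": {"id_17", "id_23", "id_41"},
    "visa_overstays":     {"id_23", "id_90"},
    "watchlist_contacts": {"id_23", "id_41"},
}

def cross_source_hits(sources, min_sources=2):
    """Return identifiers appearing in at least min_sources distinct
    sources, along with the set of sources each appears in."""
    seen = defaultdict(set)
    for name, ids in sources.items():
        for ident in ids:
            seen[ident].add(name)
    return {i: srcs for i, srcs in seen.items() if len(srcs) >= min_sources}

hits = cross_source_hits(sources)  # id_23 appears in three sources, id_41 in two
```

The hard problems noted in the text lie upstream of this step: real sources are dirty and unstructured, and the same person may appear under different identifiers, so record linkage and entity resolution must happen before any such cross-source join is meaningful.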
One major beneficiary of improved information management technologies would be the agencies responsible for gathering and analyzing intelligence data (including the FBI, CIA, and NSA). Currently one of their significant problems is managing a flood of data that may be relevant to their efforts to track suspected terrorists and their activities. There are well-known examples in which planned terrorist activities went undetected despite the fact that evidence was available to spot them—the relevant evidence was just one needle in a huge haystack. The use of sophisticated data-mining tools for the analysis of intelligence on nuclear smuggling and illicit weapons development programs will be particularly important in efforts to protect the nation from terrorist attacks using nuclear devices.
Another potential application of improved database systems is identification of trusted users of various systems. For example, in April 2002 the U.S. Customs Service launched the Customs-Trade Partnership Against Terrorism (C-TPAT).2 C-TPAT “requires importers to take steps to assess, evolve and communicate new practices that ensure tighter security of cargo and enhanced security throughout the entire supply chain. In return, their goods and conveyances will receive expedited processing into the United States.”3 The goal is to provide an incentive to shippers to improve their own security procedures. In this case, good data and data analyses are essential for understanding normal patterns of shipping—and, thus, to know who to trust and who to scrutinize more carefully because of unusual or suspect patterns.
A trusted-fliers program has been proposed and advocated by Governor Tom Ridge, director of the Office of Homeland Security. Frequent airline travelers would provide information about themselves, enabling the airlines or the government to perform a background check on them and to know more about the characteristics and circumstances of passenger traffic. The advantage to the “trusted” traveler in providing this information would presumably be faster processing through security checkpoints if the background check indicated a low risk. More important, the information provided by travelers, coupled with data from other public and private sources, could allow the airlines and security authorities to gain a better understanding of normal patterns of travel and to spot unusual and suspect combinations of passengers on single flights and on multiple flights.
Some skepticism about whether this sort of data mining program would be possible or effective has been expressed by Congress, TSA, and the airlines.4 Among the issues: What is the scope of the data that would be gathered? Who would be the users? What legal structures would protect the system’s integrity and limit the potential for misuse? There are also systems-level technical issues that would affect the implementation of such programs.5 To be sure, highly sophisticated data management systems and decision-processing capabilities would be necessary to assemble and evaluate the needed data and to interpret and use the results. A goal of any of these trusted user programs would be to more effectively deploy screening resources, but good data management systems would be necessary to track the trusted users and to provide assurance that they really were trustworthy. Other new technologies, such as biometrics, might also be necessary to allow accurate identification of individuals who qualify as “trusted.” However, biometrics, as discussed later in this chapter, are far from foolproof; for example, physical characteristics vary with age, and the data are subject to the time and conditions under which they were gathered.

2. More details about C-TPAT are available on the U.S. Customs Service Web site at <http://www.customs.gov/enforcem/tpat.htm>.
3. U.S. Customs Service press release of April 16, 2002.
4. See Miller, Bill. 2002. “Ridge Pushes Fast-Track ‘Trusted Fliers’ Screening; Lawmakers, Airline Groups Express Doubts,” Washington Post, p. A04, April 23.
5. The issues associated with identity systems in general are discussed in IDs—Not That Easy: Questions About Nationwide Identity Systems, Computer Science and Telecommunications Board, National Research Council, 2002. The issues will also be explored further in an upcoming CSTB report specifically addressing authentication technologies; see <http://cstb.org/project_authentication>.
Also, data mining has major privacy implications. Efforts to address these implications and mitigate their negative aspects include data-mining algorithms that discover general trends without requiring full disclosure of individuals’ data records. Still, this zero-knowledge approach has limits. Attempts to identify terrorists could regularly require that an intelligence agency ask other government agencies and content providers for data on connections between individuals. (See Chapter 5 for more on privacy issues.)
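A toy illustration of the trend-without-disclosure idea mentioned above: release only aggregate query answers, with random noise added so that no single individual's record is reported exactly. The noise mechanism, epsilon value, and records below are illustrative assumptions, not a description of any deployed system.

```python
import random

def laplace_noise(scale):
    # The difference of two independent exponential draws is
    # Laplace-distributed with the given scale.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def noisy_count(records, predicate, epsilon=0.5):
    """Return the count of matching records plus Laplace noise of scale
    1/epsilon (a count query changes by at most 1 per individual)."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical query over a toy record set: analysts see only the noisy
# aggregate, never the underlying individual records.
records = list(range(100))
answer = noisy_count(records, lambda r: r < 30)
```

The general trend (roughly 30 matches) survives the noise, while the exact contribution of any one record is obscured; the limitation noted in the text remains, since link-level questions about specific individuals cannot be answered this way.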
Even in a nonterrorism context, data mining could save lives. For example, public health officials could collect and analyze real-time data describing admissions to hospital emergency rooms, monitor purchases of medications, inspect school-attendance records, and integrate this information with background information about the residence and job locations of affected patients both to pinpoint a biological outbreak and identify others at risk.
The development of database-management standards, though generally a lengthy process, is clearly needed. Such standards can be developed—possibly by industry/government agency consortia—if the members perceive sufficient value for their respective constituencies. In some cases, the government may assume funding responsibility. However, these standards efforts may not succeed if they are poorly aligned with commercial markets. The evolution of commercial standards for understanding linked critical infrastructures and operational systems would itself be a significant step toward developing data-collection systems and standards for counterterrorism applications.
SENSORS AND SENSOR NETWORKS
Because homeland defense against terrorist-delivered weapons of mass destruction will involve the entire spectrum of military and federal, state, and local government personnel, as well as volunteer organizations, the scenarios under which sensors will be needed and the protocols for their use may be as varied as each group’s specific mission. The DOD and DOE have long been active in developing sensors, but these devices were intended largely for the protection of battlefield troops and the units that support them.
There are some important differences in the basic characteristics of military-battlefield sensors and those for homeland defense. Established procedures, pre-engagement vaccination, and protective gear are well defined for the military battlefield scenario, but with the exception of some emergency response personnel, these are virtually nonexistent in the civilian sector. Further, military operations are generally conducted with the benefit of some intelligence data, giving some a priori specificity to the type of chemical, biological, or nuclear threat likely to be encountered. By contrast, terrorist use of weapons of mass destruction is less predictable. Finally, military operations may tolerate exposure levels that hurt but do not cripple unit effectiveness, whereas protection of the health of the civilian population to the maximum extent possible is a political mandate.
Nevertheless, sensors developed for battlefield detection of chemical, biological, and nuclear weapons represent a good starting point. But to meet the needs of homeland defense, it will be necessary to have sensors that provide the lowest achievable false-alarm rate, operate against the widest possible number of agents, and offer significantly improved sensitivity, specificity, and area coverage.
Because chemical, biological, and nuclear weapons each pose different threat scenarios, differences in sensors and their operational protocols must be considered.
Chemical weapons are point- or area-release, and their health impacts are generally seen immediately. However, they may be detectable before actual deployment. Trace amounts of chemical contaminant can be detected on the package containing the weapon and even on the individual transporting it. Current sensor capabilities are fairly limited; in many cases, the best “technology” for practical use continues to be trained dogs, which provide broad-spectrum high-sensitivity sensing. Manufactured sensors are often designed for use in specific environments and to be selective for only one or two chemicals. The development of new sensor systems for chemical agents will require advances in a number of different subsystems, including sample collection and processing, presentation of the chemicals to the sensor, sensor arrays with molecular recognition, sophisticated signal processing, and amplification of the transduction events. The precise chemical signals that provoke responses in dogs remain uncertain, and basic research to study how animal species accomplish both detection and identification of trace chemicals could yield new concepts for manufacturing better sensor systems. (See Chapter 4 for more on chemical sensors.)
Biological weapons can also be point- or area-release, but their health impacts may not become apparent for days or weeks. Further, it is uncertain whether trace amounts of a biological agent will be discernible, so the first opportunity to detect it may be at release. Thus the rapid diagnosis and treatment of the illness, and recognition of the weapon that caused it, are very important. Equally important is the flow of rapid and reliable information throughout the health-care community, particularly in the early stages of recognition of a bioterrorist attack.
The classic means of surveillance of biological agents is to identify patients
with an unusual disease or syndrome and to then establish the nature of the pathogen by standard laboratory diagnosis. Physical sensors that screen for aerosolized particles, and molecular probes that establish the nature of the organism, would complement the classic process and permit quicker analyses. There is also the possibility of symptomatic surveillance—real-time screening in hospital emergency rooms of syndromes such as flulike illness, diarrhea, and rashes and spots. By feeding such data into sophisticated computer models, it may be possible to detect subtle fluctuations in symptomatic admissions, suggesting that something above the background rate of illness, such as a bioterrorist attack, is occurring.
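The fluctuation-above-background idea can be illustrated with a standard one-sided CUSUM control chart over daily syndromic counts. The baseline rate, tuning constants, and admission counts below are hypothetical.

```python
# Sketch: flag days on which syndromic counts (e.g., flulike emergency-room
# admissions) climb above the expected background. All numbers here are
# illustrative, not real surveillance data.

def cusum_alerts(daily_counts, baseline_mean, k=1.0, h=5.0):
    """One-sided CUSUM: accumulate each day's excess over
    (baseline_mean + k) and alert whenever the running sum exceeds h."""
    s, alerts = 0.0, []
    for count in daily_counts:
        s = max(0.0, s + (count - baseline_mean - k))
        alerts.append(s > h)
    return alerts

counts = [20, 22, 19, 21, 20, 24, 30, 35, 41, 48]  # hypothetical admissions
flags = cusum_alerts(counts, baseline_mean=20.5)
# The sustained rise at the end of the series triggers alerts.
```

A CUSUM responds to sustained small shifts rather than single noisy days, which matches the problem described above: a bioterrorist attack would appear as a persistent excess over the background rate, not a one-day spike.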
One of the most exciting possibilities for early detection of a biological outbreak is preclinical diagnosis. With the elucidation of the DNA sequence of the human genome, it may be possible to examine selective patterns of gene mutation induced by different biological agents in humans long before the actual organism has been detected. As we learn more about the pathogenesis of different agents and the specific bodily responses mounted against them, it may turn out that each pathogen induces a unique molecular signature in the host gene-expression response. Thus, using DNA chips, it may someday be possible, without ever having to culture suspected agents, to know what type and perhaps what species we are encountering—and to commence focused and rapid treatment accordingly. (See Chapter 3 for more on detection of biological outbreaks.)
An important line of defense in a layered system of homeland protection is the detection and interdiction of illicit nuclear weapons and special nuclear material (SNM), as well as the detection and disruption of illicit weapons development programs. Sensors and sensor networks can contribute to this defense effort by providing technical means for detecting the movement of SNM, especially highly enriched uranium (HEU), either in weapons or as contraband, through border transit points and around critical U.S. assets such as ports, cities, and other high-value facilities. A national detection network could consist of several types of sensors: large numbers of simple counters that indicate the presence of radiation, backed up by smaller numbers of spectroscopic instruments to identify specific isotopic signatures. The technical challenge for the deployment of both types of sensors is the differentiation of signals of interest from the background of naturally occurring radioactivity and medical/industrial radioisotopes.
The presence of certain types of penetrating radiation is a signature of most (but not all) SNM. Passive detection of gamma rays and/or neutrons can be an effective technique in some circumstances for revealing the presence of illicit SNM or improvised nuclear devices (INDs), though passive monitoring of these materials would require large-area detectors for acceptable sensitivity. In other cases, active interrogation methods using neutron detectors and pulsed neutron sources may be required. Active systems are more complex and costly than passive detectors. Additionally, some materials (those with high atomic number) can be detected indirectly by gamma radiography. While shielding of SNM can
interfere with the signals produced by all of these detection methods, the systems could still serve as a useful first indicator of a wide spectrum of potential threats. In the near term, improvements in neutron interrogation sources (i.e., neutron generators) and detectors for HEU would be a very useful step toward increasing our detection capabilities. (See Chapter 2 for more on sensors of nuclear materials.)
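For the simple gross-count detectors described above, a basic design question is where to set the alarm threshold relative to natural background. The sketch below uses the Poisson approximation for background counts; the count rates, dwell time, and four-sigma criterion are illustrative assumptions, not specifications of any fielded system.

```python
import math

def alarm_threshold(background_rate_cps, count_time_s, z=4.0):
    """Alarm level in total counts: the background mean plus z standard
    deviations, using the Poisson approximation sigma = sqrt(mean)."""
    mean = background_rate_cps * count_time_s
    return mean + z * math.sqrt(mean)

def alarms(total_counts, background_rate_cps, count_time_s, z=4.0):
    return total_counts > alarm_threshold(background_rate_cps, count_time_s, z)

# Hypothetical portal: a vehicle dwells 5 s; background is 300 counts/s.
threshold = alarm_threshold(300, 5)  # 1500 + 4*sqrt(1500), about 1655 counts
```

The sketch makes the core trade-off visible: a higher z suppresses nuisance alarms from background variation but raises the minimum detectable source, and legitimate medical or industrial isotopes can still exceed the threshold, which is why spectroscopic identification is needed as a second tier.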
In addition to detection of chemical, biological, and nuclear agents or weapons, sensor systems can also be used to produce images. In particular, remote sensing technologies, such as light detection and ranging (LIDAR), synthetic aperture radar (SAR), and high-resolution satellite imagery, can be used for surveillance or during emergency response and cleanup efforts.6
Whatever type of attack the sensors are designed to prevent or respond to, the roles that sensor systems play can be described in terms of four specific categories—threat warning; incident response; treatment; and recovery and attribution—each with its own set of requirements:
Threat warning covers point-of-entry monitoring for preattack detection, as well as area monitoring of presumed target areas. Because the number of sensors required for area monitoring is great, they must be low-cost, small, fixed in place, and highly sensitive (as opposed to selective). Also, maximum utility from area monitoring will require networking the sensors, thereby allowing for higher-level evaluation of a potential threat.
Incident response scenarios, by contrast, require handheld portable sensors and minimal training for operators. Both point sensors (for site characterization) and short-range standoff sensors (for site evaluation prior to entry) will be of value. Incident response will occur at a critical time for evaluating and controlling the severity of the attack, but this will also be the time of weakest coordination as personnel from federal, state, and local governments come onto the scene. A mechanism for networking data from sensors carried by these people would allow a single picture of the threat to evolve more quickly.
For treatment, the sensors’ greatest contribution will be made in the aftermath of a biological attack. They should be able to provide quick and accurate diagnoses, without the hours or days of time lag associated with standard culture-growth techniques.
For recovery and attribution, the speed at which information is available is usually less important than the accuracy of the data. For recovery, sensors would be useful for monitoring the level of contamination at a site during and after cleanup activities. For attribution, the goal would be the use of sensors in
forensic investigations to determine the source of a terrorist attack or to assign responsibility.
Recent research and development, focusing most heavily on portable sensors for chemical and biological agents, has followed two basic paths. The first is a repackaging of standard laboratory-analysis techniques for field use, and it includes various methods of spectroscopy. The second basic path has been in the introduction of new affinity-based sensors, in which the chemical or biological agent is selectively bound to a surface through use of a specialized surface coating; the presence or absence of the agent on the surface is then measured by one of several mechanical, electrical, or optical transduction methods. The sensitivity, selectivity, quantification, and time response of these affinity-based sensors are functions of the specialized coatings and signal-transduction methods used.
Spectroscopy methods—the first path—tend to be more general-purpose, with a single instrument being useful for detection of a number of agents. In contrast, to use affinity-based instruments for detection of multiple agents, an array of sensors is needed where the elements of the array receive a variety of coatings, each specialized to allow detection of a specific chemical or biological agent.
Either way, to carry sensor-system performance to the level needed, homeland defense will require not only continued improvement in basic sensor performance but also a better definition and understanding of overall performance when many sensors are networked together. A number of factors will contribute to the effective functioning of sensor networks. Communications protocols will be needed, and network architecture issues associated with connectivity, bandwidth allocation, signal processing, and data fusion must also be addressed.
In particular, algorithms for detection in the presence of significant clutter must be developed, with a focus on achieving excellent detection capability while minimizing false alarms. In many instances, the impact of false alarms will depend on circumstances. The trade-offs between false positives and false negatives and the consequences of each must take into account how the system can be used most effectively. Issues will include the system in which the sensors are installed (e.g., Are there backup or alternate security checks?), the users of the outputs (e.g., first responders, scientists supervising recovery efforts), and the time scales on which decisions about what to do with the results must be made.
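The trade-off between false positives and false negatives can be made concrete with a small sketch. The model below treats a single threshold detector whose clutter-only and agent-present readings are drawn from assumed Gaussian distributions; the means, spreads, and thresholds are illustrative choices, not values for any real sensor:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal(mu, sigma) variable."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def alarm_rates(threshold, clutter_mu, clutter_sigma, agent_mu, agent_sigma):
    """False-alarm and missed-detection probabilities for a detector that
    alarms whenever the reading exceeds `threshold`. Clutter-only and
    agent-present readings are modeled as Gaussians -- an illustrative
    assumption, not a validated sensor model."""
    p_false_alarm = 1.0 - normal_cdf(threshold, clutter_mu, clutter_sigma)
    p_missed = normal_cdf(threshold, agent_mu, agent_sigma)
    return p_false_alarm, p_missed

# Sweeping the threshold traces out the trade-off: a lower threshold
# misses fewer real events but raises the false-alarm rate.
for t in (1.0, 2.0, 3.0):
    fa, miss = alarm_rates(t, clutter_mu=0.0, clutter_sigma=1.0,
                           agent_mu=4.0, agent_sigma=1.0)
    print(f"threshold={t}: P(false alarm)={fa:.3f}, P(miss)={miss:.3f}")
```

Which point on this curve is acceptable depends on exactly the contextual questions raised above: whether a backup check exists downstream, who acts on the alarm, and how quickly.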
The next important step is to address the detection of weapons of mass destruction from a systems-engineering perspective, which spans the capture/collection of the sample, preparation of the sample, reliable delivery of the sample to the sensor, sensor interrogation (including background and metric verification), analysis of the signal, and reporting of the data from individual sensors. This perspective can be enhanced to include redundancy issues and other performance enhancements achieved from multiple networked sensors. Several other attributes will accrue from this system-design approach:
Establishment of standards—covering response time and field stability/durability, for example—for detection of weapons of mass destruction;
Use of two-level sensor systems in which a fast, highly sensitive sensor—one with low specificity and thus a relatively high false-alarm rate—triggers a second sensor with high specificity and a correspondingly low false-alarm rate;
Use of multiple sensors and reasoning algorithms to obtain lower overall false-alarm probability, predict contamination spread, and provide guidance for recovery actions; and
Use of networked sensors to provide wide-area protection of high-threat targets.
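The two-level scheme can be quantified with a back-of-the-envelope calculation. Assuming a sensitive screening stage triggers a specific confirming stage, that the two stages respond independently (a simplification), and that an alarm requires both to fire, the net rates are simply products of the per-stage rates:

```python
def two_level_alarm(p_d1, p_fa1, p_d2, p_fa2):
    """Net detection and false-alarm probabilities when a fast, sensitive
    screening sensor (stage 1) triggers a slower, highly specific
    confirming sensor (stage 2). An alarm is raised only if both stages
    fire; statistical independence of the stages is assumed."""
    return p_d1 * p_d2, p_fa1 * p_fa2

# Illustrative numbers (assumed, not measured): stage 1 alarms often
# on clutter, stage 2 rarely does.
p_detect, p_false = two_level_alarm(p_d1=0.99, p_fa1=0.10,
                                    p_d2=0.95, p_fa2=0.01)
print(f"net P(detect)={p_detect:.4f}, net P(false alarm)={p_false:.4f}")
```

The ordering matters in practice: the cheap, fast stage screens everything, and the slow, specific stage runs only on the small fraction of events that trigger it.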
Also important is the continued development of individual sensor modalities. Significant work on chemical and biological sensors in particular is a relatively recent phenomenon. As these efforts proceed and as new data-analysis algorithms are applied to sensor outputs, improvements may be expected in many of these instruments’ sensitivity, selectivity, false-alarm probability, size, power, and cost. In addition to the need for continued basic sensor work for point-of-entry monitoring and incident-response applications, equally critical technological and economic challenges will involve developing affinity-based sensors that can be cost-effectively networked to provide wide-area monitoring.
AUTONOMOUS MOBILE ROBOTIC TECHNOLOGIES
Robotic technologies can impact all phases of counterterrorism, including detection, prevention, and response. Robots’ abilities to sense and manipulate the environment with great precision, in the absence of such human limitations as physical vulnerability, fear, boredom, and discomfort, make them ideal tools for extending operational reach. Robots can serve homeland-defense missions (including surveillance and protection of population centers, facilities, and assets, and rescue or cleanup in response to an attack) as well as tactical/offensive missions (such as intelligence collection, demining, and direct action). (See Chapters 4 and 8 for more on possible counterterrorism applications of robotic technologies.)
Ground robots may be loosely described as small (<50 lb), medium (51 to 1,000 lb), and large (>1,000 lb). Small robots are light and compact enough to be carried by humans, and it is expected that their inherent ease of handling, transport, and relatively low cost will result in their proliferation. Several small-robot prototypes have been developed under the DARPA Tactical Mobile Robotics (TMR) project and other government programs. Though their small size severely limits their operating range, duration, and mobility in outdoor or unstructured terrain, they are critical for reaching otherwise inaccessible spaces. Applications for such robots include intrusive intelligence-gathering missions (in which small size is critical); area sampling for nuclear, biological, and chemical contamination; close-in surveillance; and urban search-and-rescue operations.
Medium-size robots have greater mobility, energy reserves, and space for additional hardware such as sensors, manipulators, communications gear, and payloads. These robots are transportable by light vehicles—including pickup trucks, vans, small trailers, and high-mobility, multipurpose wheeled vehicles (HMMWVs)—that would be widely available to many potential users. Their current applications include explosive-ordnance disposal (with dedicated manipulators and payloads for removing or disabling unexploded devices), physical security (asset/facility monitoring), hazardous-waste inspection/remediation systems, and law enforcement operations. New initiatives under the DARPA/Army Future Combat Systems and Office of Naval Research Gladiator programs suggest that in the next 5 years vehicle platforms of this size may also serve as forward scouts, sentries, surveillance and target-acquisition platforms, communication relays, resupply/logistics vehicles, and even firing platforms.
Large robots will also have value for counterterrorism missions. Teleoperated or semiautonomous, they can be used for mine clearing, obstacle breaching, construction, fire-fighting, and rubble removal, particularly in areas contaminated by chemical, biological, or nuclear weapons.
The ability of a robot to perform a specific mission will depend on the robotic system’s level and “distribution” (whether on-board or off-board) of autonomy. These factors depend, in turn, on the expected integrity of the operator–robot communications link, the maximum length of time the robot might be out of contact with the operator, the robot’s knowledge of its location in the world and with respect to the operator, the robot’s knowledge of its internal health, and the robot’s knowledge of its environment.
The basic types of system autonomy include:
Teleoperated systems, which primarily use the intelligence of a human operator to operate the system during execution of a mission;
Scripted autonomous systems, in which the guidance, navigation, and control (GN&C) systems are typically autonomous but the mission profile is significantly constrained;
Supervised autonomous systems, which include a human operator (via a communications link) who assists in the interpretation of sensor information and provides situational awareness and mission guidance to the robot; and
Intelligent autonomous systems, which use robot-embedded software for incorporating many of the attributes of human intelligence.
From these basic descriptions, the level of robot autonomy can be viewed as a composite of the level of guidance and control, the level of autonomous planning and tasking, the level of situational awareness (i.e., perception), and the level of self-awareness (diagnosis).
The level of guidance and control is characterized by the degree to which a robot can create a desired motion without human involvement. Whereas teleoperation of a robot assumes a “drive” camera and a communication link for direct operator control, autonomous/adaptive GN&C allows the robotic system to adjust automatically to changes in the robotic-system configuration (e.g., mass properties, failures) or in the operating environment (e.g., obstacles, lighting conditions). Additional research in guidance technology is required to enable autonomous systems to perform at levels similar to those achievable by a human operator or pilot given the same degree of situational awareness.
Robot planning and decision making are characterized by the extent to which the robot can plan its mission activities, motion, usage of payloads, and specific goals to be achieved without human involvement. This must be accomplished within certain limits, which may include specific mission rules and constraints on robot consumables (e.g., power, fuel, memory). Most common today are systems that plan robot routes (path planning) or schedule devices (automated scheduling). Several government S&T programs have demonstrated either dynamic path planning (that is, in environments without fixed infrastructure) or automated, continuous device scheduling. Activity planning, which involves the coordination of multiple robot subsystems, is a critical research area for the future.
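As a minimal sketch of the path planning described above, the following breadth-first planner finds a shortest route on a small occupancy grid. Fielded systems use richer algorithms (A*, D*) and replan continuously as the map changes, but the underlying structure is similar:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first path planner on a 4-connected occupancy grid
    (0 = free, 1 = obstacle). Returns a shortest list of cells from
    `start` to `goal`, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:      # walk back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour around the right side
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))
```

Dynamic path planning, as the text notes, means rerunning such a search whenever the robot's map of obstacles changes, rather than relying on fixed infrastructure.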
Robots typically communicate data to a central command-and-control site via uplink and receive commands via downlink. Teleoperated robots have requirements for high-bandwidth links (including video), while semiautonomous robots do not. For systems of cooperating robots, the need for maintaining reliable network connectivity will be critical. Point-to-point links are defined by operating frequency, data rate, range, transmitter power, and receiver sensitivity. The optimal choice of frequencies used in the point-to-point links will be environment-dependent and must be traded off with other factors, such as range and data rate. In any case, robot control links must be robust in the presence of multipath interference. A variety of strategies for mitigating multipath interference exist, including spread spectrum. For tactical applications, communication links must also satisfy detection, interception, jamming, and encryption requirements.
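The range-versus-frequency trade-off for a point-to-point link can be sketched with a simple free-space link budget (Friis path loss). All transmitter, antenna, and sensitivity figures below are illustrative assumptions, and real urban links lose additional margin to the multipath and shadowing effects noted above:

```python
from math import log10

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis):
    20*log10(d) + 20*log10(f) + 20*log10(4*pi/c), with the last term
    evaluating to -147.55 dB. A lower bound on real-world loss."""
    return 20 * log10(distance_m) + 20 * log10(freq_hz) - 147.55

def link_margin_db(tx_power_dbm, tx_gain_db, rx_gain_db,
                   rx_sensitivity_dbm, distance_m, freq_hz):
    """Margin of received power over receiver sensitivity; a positive
    value means the link closes at this range."""
    rx_power = (tx_power_dbm + tx_gain_db + rx_gain_db
                - fspl_db(distance_m, freq_hz))
    return rx_power - rx_sensitivity_dbm

# Illustrative comparison: at a fixed 1-km range, moving from 900 MHz
# to 2.4 GHz costs about 8.5 dB of margin (all figures assumed).
for f in (900e6, 2.4e9):
    m = link_margin_db(tx_power_dbm=20, tx_gain_db=2, rx_gain_db=2,
                       rx_sensitivity_dbm=-95, distance_m=1000, freq_hz=f)
    print(f"{f / 1e6:.0f} MHz: margin = {m:.1f} dB")
```

Higher frequencies buy bandwidth (and thus data rate) at the cost of range, which is exactly the trade-off the system designer must resolve for each environment.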
Where groups of robots must collaborate, base-station-based and peer-to-peer networks can be considered. A base-station-based architecture is characterized by a number of nodes communicating with a central hub. A peer-to-peer mobile ad hoc network (MANET) architecture may be more appropriate for dynamic environments in which the robots themselves are moving. MANET architectures are reconfigurable over time and space and have no single point of failure.
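The single-point-of-failure contrast between the two architectures can be shown with a toy reachability check on two small topologies (node names and links are hypothetical):

```python
def reachable(adjacency, start, removed=frozenset()):
    """Set of nodes reachable from `start` when the nodes in `removed`
    have failed, via depth-first search over an adjacency-list graph."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node in removed:
            continue
        seen.add(node)
        stack.extend(adjacency.get(node, ()))
    return seen

# Star (base-station) topology: every robot talks only to the hub.
star = {"hub": ["r1", "r2", "r3"],
        "r1": ["hub"], "r2": ["hub"], "r3": ["hub"]}
# Mesh (MANET-style) topology: robots also relay for one another.
mesh = {"r1": ["r2", "r3"], "r2": ["r1", "r3"], "r3": ["r1", "r2"]}

# Losing the hub isolates every robot in the star network...
print(reachable(star, "r1", removed={"hub"}))
# ...while the mesh stays connected after any single node failure.
print(reachable(mesh, "r1", removed={"r2"}))
```

The mesh's resilience comes at the cost of routing protocols that must continually rediscover paths as robots move, which is the reconfigurability the text describes.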
SUPERVISORY CONTROL AND DATA ACQUISITION SYSTEMS
Specialized computer software systems, known as supervisory control and data acquisition (SCADA) systems, are widely used to control many essential real-time processes, including the generation and distribution of electric power, the management of oil and natural gas pipelines, and the monitoring of engineering systems in buildings, petrochemical facilities, and manufacturing plants. But today’s SCADA systems have been designed with minimal attention to security. For example, data are often sent in the clear, and protocols for accepting commands are open, with no authentication required. Control channels are often wireless, or they are leased lines that pass through commercial telecommunications facilities. Thus there is little protection against the forgery of messages. And data corruption—not unlikely in these SCADA systems, much of whose technology is old—could be entirely crippling.
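A minimal sketch of the kind of message authentication these legacy protocols lack is shown below: each command carries a timestamp and an HMAC-SHA256 tag so the receiver can reject forged or replayed messages. The key, field names, and freshness window are illustrative assumptions; a real retrofit would also have to solve key distribution and absorb the processing overhead such add-ons impose:

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"example-key-not-for-production"  # hypothetical pre-shared key

def sign_command(command: dict, key: bytes = SHARED_KEY) -> dict:
    """Attach a timestamp and an HMAC-SHA256 tag to a SCADA-style
    command message; the tag covers every field, so in-transit
    tampering invalidates it."""
    msg = dict(command, timestamp=time.time())
    payload = json.dumps(msg, sort_keys=True).encode()
    msg["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return msg

def verify_command(msg: dict, key: bytes = SHARED_KEY,
                   max_age_s: float = 5.0) -> bool:
    """Recompute the tag over everything except the mac field and
    check freshness before acting on the command."""
    body = {k: v for k, v in msg.items() if k != "mac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    fresh = time.time() - msg.get("timestamp", 0) <= max_age_s
    return hmac.compare_digest(msg.get("mac", ""), expected) and fresh

cmd = sign_command({"device": "valve-7", "action": "close"})
print(verify_command(cmd))   # authentic message
cmd["action"] = "open"       # tampered in transit
print(verify_command(cmd))
```

Even this small addition illustrates the tension discussed below: hashing and freshness checks add computation and state to a control path that may be specified in milliseconds.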
In addition, because deregulation has meant placing a premium on using existing capacity more efficiently, interconnections to shift supply from one location to another have increased, making SCADA systems more indispensable than ever. As one example, the electric-power grid has become more heterogeneous in terms of the number and types of power-generation devices—solar cells, microturbines, and many other sources all contribute to the network from far-flung locations. Thus, problems of distributed dynamic control in a complex, highly interactive system, controlled in real time, have become major issues in operating the power grid reliably, even under routine conditions.
Making the present systems more secure, moreover, is not simply a question of installing additional layers of technology. Given the real-time nature of SCADA, timing is quite important to system performance and optimal efficiency; operations can demand millisecond accuracy. But security add-ons in such an environment can complicate timing estimates and severely degrade SCADA performance.
Several issues must be addressed in the effort to improve the security of SCADA technologies. First, there is a need for much additional research and modeling on the existing SCADA systems, especially those that monitor networks such as pipelines or power grids, in order to understand their vulnerabilities. Some of this modeling and analysis must be undertaken by the operators themselves, and indeed this has begun since September 11; the chemical industry, for one, reports that SCADA systems in refineries have been under review. There is also a role for government at both the national and state levels—for example, in detecting vulnerabilities in present systems through comprehensive gaming (red teaming) analysis.
Second, investments will have to be made if existing SCADA technologies are to be upgraded and new ones deployed. Federal and state governments should offer incentives that encourage the appropriate private sector investments.
Third, the government must work with industry associations on standards that will enhance both the technology and its security. The National Institute of Standards and Technology, which has long played such a role at the federal level, should lead this effort.
BIOMETRICS

Every society exists somewhere on the spectrum between complete openness and total restriction of behavior and movement. In the United States, we are proud of our society’s extremely open nature, but that asset is also a basic element of its vulnerability to terrorism.
An obvious solution is increased physical and information-technology security, though the appropriate level of security should not be uniform throughout the country. It would depend on the type of facility or system being guarded, the potential damage if an intrusion occurs, and the degree to which security interferes with effective functioning of the system. Clearly, the rules for nuclear power plants should be different from those for buses.
One developing set of technologies that could play a role across the board—ranging from major to minor, depending on the specific case—is biometrics. In authorizing participants in any particular system—physical or IT security alike—biometrics may provide alternatives to picture IDs, magnetic entry cards, or passwords.
Biometrics uses behavioral and physiological characteristics—including fingerprints, irises, written signatures, faces, voices, and hand shape—to authenticate the identity of an individual. These characteristics are distinctive but not necessarily unique, and they can vary over time and conditions of collection and may change with medical condition, advancing age, or the onset of puberty. Still, biometric identification may provide a higher level of confidence for the authentication of identity than can devices such as passwords. And, as opposed to other authentication tokens that might be used (such as keys), biometric measures cannot easily be stolen or mimicked. However, biometrics must be part of a multifactor authentication scheme rather than a one-stop solution. Biometric authentication is most applicable to sensitive applications in which the security risk of a false positive (an imposter being accepted as legitimate) is much higher than that of a false negative (an authorized individual being rejected as illegitimate). Several U.S. government projects are currently aimed at improving the distinctiveness of individual measures and exploring “biometric data fusion” for combining multiple measures. Such advances would allow for almost one-to-one mappings of measure sets to individuals, making the technology exceedingly reliable but also more subject to privacy abuses.7
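The “biometric data fusion” idea can be illustrated with a simple weighted-sum combination of match scores. The modalities, weights, and acceptance threshold below are illustrative assumptions, not values from any deployed system:

```python
def fuse_scores(scores, weights, threshold):
    """Weighted-sum fusion of match scores from several biometric
    modalities (each score normalized to [0, 1]); the claimed identity
    is accepted only if the fused score clears the threshold."""
    if set(scores) != set(weights):
        raise ValueError("each modality needs both a score and a weight")
    total = sum(weights.values())
    fused = sum(scores[m] * weights[m] for m in scores) / total
    return fused, fused >= threshold

weights = {"fingerprint": 0.5, "iris": 0.3, "voice": 0.2}

# A strong fingerprint match cannot by itself outvote weak iris and
# voice matches, so the fusion behaves like a multifactor check.
fused, accepted = fuse_scores(
    {"fingerprint": 0.95, "iris": 0.40, "voice": 0.30},
    weights, threshold=0.80)
print(f"fused={fused:.3f}, accepted={accepted}")
```

Setting the threshold high biases the system toward false rejections rather than false acceptances, which matches the sensitive applications the text identifies as the natural fit for biometric authentication.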
On a less invasive level, biometrics at more or less its present state could enhance the protective value of more traditional security systems. While the technological elements behind barriers, fences, locks, perimeters, and other physical ways of safeguarding a location—as well as nonphysical approaches such as background checks—may not be new or exciting, they complement approaches such as biometrics. The joint use of traditional and newer technologies might thus allow exploitation of the latter while minimizing the need for potentially intrusive refinements.

7 Authentication technologies (including biometrics) and their implications for privacy will be explored in depth in a forthcoming CSTB report from the Committee on Authentication Technologies and Their Privacy Implications; see information available online at <http://cstb.org/project_authentication>.
HUMAN AND ORGANIZATIONAL FACTORS
The organizing principle of this report is that our nation’s store of scientific and technological knowledge—as it exists and as it must be improved—is a key resource in efforts to counter the threat of terrorism. This knowledge is the basis for effective intelligence and military operations against terrorism, for securing our borders and other points of entry, and for making inaccessible the many targets of terrorist activities.
However, technology is not the sole solution to any problem. Virtually all technologies—including those discussed in this report—are subject to the reality that human agents and social organizations are necessary to implement and operate them. Decision makers oversee warning systems, human agents administer detectors, relief efforts following chemical or biological attack require the collective efforts of the nation’s health machinery, and precision warfare is a highly orchestrated human activity. A key aspect in the effective deployment of any of the technologies discussed in this report is the ease and effectiveness of use of information and other technical outputs by the people they are intended to support. Thus design and deployment of the systems must take human, social, and organizational factors into account.
In efforts to counter terrorism, the human interface with technology appears at three junctures:
Those who are recruited to administer the technologies of detection, prevention, and response to attack must be not only expert but also trustworthy and loyal. Few forms of sabotage are more effective than sabotage from within. Guaranteeing this side of security, however, can become a matter of government compulsivity and a potential source of inefficiency and ineffectiveness. Some kind of equilibrium, which takes into account both the value of prudence and the dangers of overkill, is required.
All types of counterterrorism-related technological systems require the mobilization of organizational machinery. In many cases, their missions take place under crisis conditions, which multiply the probabilities of accidents, breakdowns of communication, lack of coordination, errors of judgment, and jurisdictional conflicts. There is no sure cure for such failures, but advanced training and instruction of agents, as well as comprehensive planning for contingencies and backup strategies, are essential.
Sometimes the applications of science and technology in the interests of security run counter to cherished individual and political values. Wholesale
detection efforts at airport terminals and other hubs of transportation are simultaneously experienced as comforting and as costly, inefficient, irritating, and invasive. The use of high-tech identifying and truth-detecting devices may have similar alienating effects. Surveillance of telephones, credit records, and personal movements in the interests of security also raises serious questions about privacy and civil liberties. The systems perspective that should be used to determine criteria for deployment of technologies must embrace this reality as well; there are many ways to remedy the vulnerabilities of our nation’s critical infrastructures, and the best solutions must reflect a balance between the desire for security and human values.
Often, the weakest part of the system is the (frequently neglected) human link. Overlooking the human element can make it more difficult for staff members to do their jobs and, ironically, significantly reduce the effectiveness of the security technologies. In the worst case, the entire system may be rendered useless. Thus, human-centric design and an improved understanding of the factors that contribute to systematic human errors are essential.
Most people are inherently helpful and dependable and are responsive in the face of unforeseen circumstances. We must take into account their strengths—the attributes that no technology could duplicate—while avoiding, to the maximum extent possible, the creation of jobs that are tedious and unrewarding. This must be a basic element of our systems approach. We need to allow for defense in depth (multiple layers) to compensate for human error, of course, but good system design should be characterized by human roles in which vigilance and interest are heightened, thereby making errors less likely.
Such human factors in design must apply equally well to the operators of the security system and to those who encounter it.
COORDINATION OF PROGRAMS ON CROSSCUTTING TECHNOLOGIES
The nation’s capabilities for pursuing an expanded and coordinated S&T agenda for the crosscutting technologies identified in this chapter are considerable. A number of programs with broad applicability to these technologies have already been established within DOD, DOE, NSF, and NASA, and relevant research is under way at these agencies, in the national laboratories, and at scores of research universities. For example, in recent years, as concern about terrorism has grown and as the post-Cold War powers have focused on safeguarding nuclear materials, the DOE national laboratories have already begun researching sensors and other detection technologies, as well as data management, visualization, and modeling pertinent to counterterrorism. The DOE laboratories also have expertise in both the physical and biological sciences, as is needed for such crosscutting R&D initiatives, and are performing advanced work in the key fields of
information technology and nanoscale science. Similarly, important and relevant activities are occurring throughout other government agencies.
A mechanism is needed for coordinating all of this work in crosscutting areas across agencies. The logical approach would be to use a National Science and Technology Council (NSTC) subcommittee. The NSTC was established in 1993, and one of its objectives “is the establishment of clear national goals for Federal science and technology investments.”8 Subcommittees of the NSTC are often formed in areas such as climate change, biotechnology, and nanoscale science, engineering, and technology, where multiple agencies need to work together toward a common set of goals. Such subcommittees make decisions about programs and provide OMB with the information necessary to produce budget crosscuts showing the amount of resources and types of programs devoted to a specific area across the federal government.
Recommendation 11.1: The National Science and Technology Council should establish a Subcommittee for Counterterrorism Research and Development to, among other tasks, coordinate federal work on crosscutting technologies such as modeling and simulation, data management, sensors and sensor networks, and robotics. The subcommittee should have participation from the highest levels of the relevant agencies.
This chapter has outlined the potential impact of seven crosscutting areas—systems analysis and modeling, integrated data management, sensors and sensor networks, robotic technologies, SCADA systems, biometrics, and human factors—on counterterrorism efforts. The realization of this potential will depend on a program of directed basic and applied research and will require an expansion and coordination of existing S&T programs and funding if the government’s work is to produce effective tools for countering terrorism and ensuring homeland security.
There are three problems with the current level of effort. First, it is too small. It is clear that solutions for current vulnerabilities and the ability to tackle future problems lie in innovations and discoveries in the biological sciences, physical sciences, and all fields of engineering, as well as at the interfaces of these disciplines and in the relevant social sciences. Therefore a balance of investments is critical, across different time horizons as well as across numerous disciplines. The government’s underinvestment in the physical sciences and engineering has been well documented.
8 National Science and Technology Council Web site: <http://www.ostp.gov/NSTC/html/NSTC_Home.html>.
A second problem with the current level of effort is its focus. Programs are tied to the existing missions of the agencies, as is appropriate. This means that while some of the R&D may be applicable to the technologies for homeland security, the present federal effort does not add up to the research and development program that is needed. In robotics, for example, NSF has long had a relatively small program conducted at several universities and focused on fundamental research. NASA has funded robotics R&D that supports its missions in space. DOD has invested heavily, through the individual armed services and DARPA, in unmanned aircraft for sensing and surveillance. Historically, DOE’s efforts in robotics have been associated with nuclear materials handling, although recently the agency’s laboratories have initiated some substantial programs that may contribute to improved homeland security in many ways. Private-sector investments in robotics follow a similar pattern—for example, the automotive companies are investing in robotic R&D that will support their production and assembly lines, and energy and water providers are developing robots useful for monitoring fuel pipelines and aqueducts. The work under way is productive, and important new technologies are being developed, but even added together, these public and private investments will not produce robots that can be adapted and deployed for many purposes in homeland security—such as surveillance, detection, and postdisaster monitoring and recovery.
The same pattern—R&D investments that are significant but not directly focused on homeland security needs—exists in the other areas of crosscutting technologies and techniques discussed in this report. Each agency has molded its programs in the context of its own objectives.
The third problem with present R&D efforts in these fields is that the programs are directed to issues largely in the domain or purview of the federal government—defense, space, and nuclear security and stockpile maintenance being prototypical examples. The crosscutting R&D efforts that will serve homeland security require collaborations with end users at the state and local levels of government so that programs can take into account the needs of these users, such as technologies for first responders. Further, federal programs must be designed with an understanding of the critical role industry will play as a developer, producer, and user of counterterrorism technologies. Important questions include who the consumer of these technologies will be, whether there will be a commercial market for new products, and what role government procurement can productively play.

Data on and analysis of the federal budget for science and technology are available from a number of sources, including National Research Council, Board on Science, Technology, and Economic Policy, 2001, Trends in Federal Support of Research and Graduate Education, National Academy Press, Washington, D.C.; Committee on Science, Engineering, and Public Policy, National Research Council, 2001, Observations on the President’s Fiscal Year 2002 Federal Science and Technology Budget, National Academy Press, Washington, D.C.; American Association for the Advancement of Science, 2001, AAAS Report XXVI: Research and Development FY 2002, American Association for the Advancement of Science, Washington, D.C.; American Association for the Advancement of Science, 2002, Congressional Action on Research and Development in the FY 2002 Budget, presentation materials from the Alliance for Science and Technology Research in America (ASTRA), Washington, D.C., available online at <http://www.cra.org/govaffairs/budget/astra.pdf>; and National Science Board, National Science Foundation, 2002, Science and Engineering Indicators—2002, U.S. Government Printing Office, Washington, D.C.
Despite these problems, the nation’s research system, with vast and diverse capabilities spread among universities, national and federal laboratories, and industry, provides a unique infrastructure and sound basis for mounting aggressive programs in the kinds of crosscutting R&D discussed in this chapter. The challenge for government leaders is to harness this capacity for the creation of a greatly expanded and coordinated national S&T agenda for counterterrorism. This will require a commitment to providing significant new funding and to sustaining the programs over a number of years.