Background Material for Chapter 3
Modern communications networks are composed of many links: satellite communication channels provide high-bandwidth pathways within and between continents, cable and fiber channels provide secure high-bandwidth channels over long distances, and radio links of various kinds serve to connect personnel and equipment over short, medium, and long ranges. Each type of link has its own advantages and disadvantages and is vulnerable to different forms of exploitation and attack. A variety of reports published by the National Research Council (NRC) and others discuss these and related topics (NRC, 1991, 1996a,b, 1997, 1998, 1999, 2001, 2002).
Communications networks using the Internet Protocol (IP) stack form the basis of network-centric warfare. While IP networks can operate using a heterogeneous set of techniques for moving the data, the protocols are common to all technologies and present a common set of vulnerabilities that might be exploited by an adversary. Attacks on the protocols themselves, or attacks on the environment based on assumptions made by the protocols, leave them vulnerable to disruption. Attacks on the protocols are well known and have been described in the literature on networking, as have attacks based on the way the protocols operate (e.g., denial-of-service attacks).
The ability to secure communication channels with encryption is clearly a crucial capability. Without this protection RED forces would be able to intercept or exploit information relayed between various divisions of the armed forces, much as the Allied forces were able to intercept Axis radio transmissions during World War II. It is already the case that certain types of encryption technology are subject to stringent export controls under existing laws. However, much of the basic knowledge about cryptographic schemes is commonly available in the public domain. Cryptographic research has been described in the open literature along with the revolution in computing technology that has enabled it.
The newest encryption standard, Rijndael, also known as the Advanced Encryption Standard (AES; FIPS-197), replaces the Data Encryption Standard (DES), which is now regarded as vulnerable (NIST, 2000). It is important to note that the government did not develop the encryption algorithm that will be used for most data; Rijndael was designed by two Belgian cryptographers and selected through an open competition. The technical skills required to implement secure communication schemes are taught in
undergraduate computer science courses and so are available in many parts of the world. This implies that opposing forces could readily encrypt their own transmissions if they chose to do so. Such encryption can also be expected to be very strong, and difficult or impossible to break using known methods. Programs such as Pretty Good Privacy (PGP) and Gnu Privacy Guard (GnuPG), which can be obtained over the Internet by anyone, provide essentially unbreakable security if used properly.
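As an illustration of how accessible the underlying techniques are, the following toy sketch builds a complete symmetric stream cipher from nothing but Python's standard hash library. It is emphatically not AES or PGP, and a scheme like this should never be fielded; the point is only that working encryption requires a few lines of undergraduate-level code.

```python
# Toy stream cipher: SHA-256 in counter mode generates a keystream that is
# XORed with the plaintext. Illustrative only -- it shows how little code a
# workable symmetric scheme requires, not a vetted algorithm like AES.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive `length` pseudorandom bytes from key+nonce by hashing a counter."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encryption and decryption are the same XOR operation.
    A key+nonce pair must never be reused."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"move units to grid 31N"
key, nonce = b"shared-secret-key", b"unique-nonce"
ct = xor_cipher(key, nonce, msg)
assert xor_cipher(key, nonce, ct) == msg   # round-trips correctly
```

The symmetry of the XOR operation means the same routine serves for both encryption and decryption, which is typical of stream ciphers.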
Computational systems are used to process the data gathered by sensor systems and human agents and to produce information that can be used by decision makers in command centers and in the field. As the armed forces become increasingly networked and more data become available online, BLUE forces will rely increasingly on computational systems to sift through the available information to provide situational awareness and to identify patterns. Computational systems are also used to automate difficult or tedious decision processes. Logistic operations, for example, can be optimized through the use of automated planning and scheduling systems.
The economies of scale and competitive pressures in the commercial computer sector have produced a situation where processing power has become a commodity. Powerful 32- and 64-bit microprocessors are produced at low cost both domestically and internationally. One result of this trend is that it has become much simpler for other nations to acquire state-of-the-art computational capabilities. Consider the fact that the majority of the supercomputers in the Top 500 listing are composed of collections of standard microprocessors lashed together with high-performance networks.
The most demanding computational applications, such as image interpretation, automated language translation, and data mining, will always fuel the drive for ever-increasing processing power. However, the skills and components required to construct powerful computational clusters are now widely available internationally.
It is not only access to high-performance computing that is a concern. Low-power electronics is also an area of active research and development, one that will be especially important for sensor networks. To provide increased intelligence at the sensor and to reduce the demands on the network, a significant amount of processing will have to occur at the sensor itself; such processing is enabled by low-power electronics.
As the price of computing hardware has dropped, the relative importance of software has increased. At this point, some of the most significant technical challenges in implementing the vision of the Future Combat Systems program center on the issue of developing reliable software systems that can coordinate distributed networks of sensors, actuators, and computers into a seamless whole. This task is complicated by the fact that the systems are expected to work in a dynamic environment in which elements may be added or removed unexpectedly and communications are not assured. In this regard, research and development being carried out in distributed systems, grid computing, and sensor networks should be viewed as germane to the military context.
The ability to produce and maintain sophisticated software systems relies on the availability of skilled personnel: programmers, analysts, testers, and others. Here again, human resources are available internationally. China, for example, currently graduates five times more engineers than does the United States, and the Indian city of Bangalore now has more technology jobs than Silicon Valley. The number of trained software engineers is declining in the United States but increasing rapidly in Asia. In the face of these worldwide trends, it is unlikely that BLUE forces will have a significant advantage in their ability to design, deploy, and operate the computational infrastructure required to support information collection and exploitation.
SENSING AND SENSORS
Information dominance hinges on the ability to collect tactically relevant information in a timely manner. This information can be derived from a variety of sources. Standoff sensors mounted on satellite platforms or aircraft provide a relatively noninvasive means of assessing the tactical situation in remote locations. A variety of sensors have been successfully deployed on these platforms, ranging from passive imaging sensors that can collect measurements in a range of spectral bands to active sensors that can be used to identify camouflaged vehicles or to produce accurate elevation maps of remote sites. As useful as these standoff sensors are, they can be confounded by bad weather, limited resolution, and camouflage.
Radar systems are also an important component of situational awareness and are commonly used to identify and localize aircraft and to provide a comprehensive understanding of the airspace in the theater of operations. Synthetic aperture radar (SAR), for example, has proved useful in a variety of applications.
On the battlefield, useful information can be derived from a variety of sensors, including chemical sensors, cameras, thermal sensors, and acoustic sensors. It is becoming increasingly attractive to consider deploying collections of small, low-power computational elements equipped with sensors and wireless communication systems that could provide information about a given area of interest. A compelling example is the recent development of acoustic sensor arrays for the detection and localization of sources of gunfire. Also beginning to become available are inexpensive biological sensors. The development of DNA microarray chips, for example, promises to provide inexpensive, rapid identification of biological samples.
A large and redundant set of sensors is typically deployed to ensure the information dominance on which U.S. forces are now heavily reliant, ranging from space-based sensors to those deployed on unmanned aerial vehicles (UAVs) to smart dust. Sensor systems will often be deployed in conjunction with communication systems that are used to correlate information obtained in various locations or to transmit sensor information to decision makers in command centers or in the field. Relevant information can also be derived from radio systems that can be used to intercept transmissions or to pinpoint the location of transmission stations, radar installations, or jammers.
Remote sensing is watching and listening to the actions of the enemy from a distance. Examples include radar (watching using electromagnetic waves) and sonar (listening using acoustic waves) and can be further divided into active and passive categories as discussed below. The basic functionality is to stand off from the action and monitor the battlespace or the environment for telltale signs of enemy activity. Sensors operate in the range from ultraviolet (UV) to radio frequency (RF) as elucidated below.
Active sensing involves sending out a probe and monitoring a response. The classic example of active sensing is radar—a pulse is transmitted and various modalities of scattered or reflected energy are monitored. The information return can be very rich, as in the case of spectroscopic probing of, for example, molecular species in the infrared (IR) or biological species with a UV excitation. In optimal circumstances, active sensing provides enhanced sensitivity and/or range compared with passive techniques. The disadvantage, of course, is that the enemy can also see the associated emissions and can, in the worst case, direct fire to destroy the sensing capability—and possibly its operators.
Passive sensing relies on detection of natural emissions, such as thermal emissions, muzzle flashes, rocket or airplane exhaust, and so on. Passive sensing is more likely to be covert because, in comparison with active sensing, the detection apparatus produces weak or no emissions of its own. The disadvantage is that the signals are weaker and usually exhibit lower specificity and/or discrimination than that achieved with active sensing. It may be possible to opportunistically take advantage of ambient signals already available in the environment, such as those from television and radio stations, which radiate at high power and at known frequencies. With more sophisticated signal-processing techniques, it may be possible to use these sources in much the same way that active sensing would be used.
Synthetic Aperture Radar
Synthetic aperture radar (SAR) is an electronic imaging technology that was introduced in the early 1950s. Its invention is generally credited to Carl Wiley, a U.S. engineer who was then working at the Goodyear facility in Arizona. SAR involves collecting a set of microwave pulses transmitted from an aircraft or a spacecraft as it moves along a flight path and received after reflection off Earth's surface. This collection of received radar echoes forms a phase-history data set, which a digital computer can process into an image of the portion of the ground illuminated by the craft's radar antenna beam. SAR filled at least two gaps inherent in the capabilities of conventional optical imagers: operation at night and operation in all weather conditions. With the advent of this type of radar imaging, the various vehicles on a battlefield (at least the stationary ones) could be located by an aircraft carrying a SAR at any time of day or night, even in the presence of clouds, smoke, or dust.
The most significant limitation of the first SAR imagers was that of relatively poor spatial resolution. Early SARs achieved only several meters of resolution, which could allow an image analyst to delineate large features on Earth’s surface but would not, for example, allow one to distinguish a tank from an ambulance on the battlefield. Fifty years later, state-of-the-art SARs typically produce imagery with spatial resolution at a level of several inches, so that vehicle identification is now feasible. Several key developments during the past 20 years have led to this ability of SARs to achieve such a high level of spatial resolution. These include (1) high-accuracy electronic navigation systems, including the advent of the Global Positioning System (GPS), that allow an aircraft’s three-dimensional position to be known to a relative accuracy on the order of a wavelength (typically centimeters) across a flight path (synthetic aperture) that may be kilometers long; (2) electronics that allow signals of ultrahigh bandwidth (several gigahertz) to be synthesized and processed; (3) high-speed digital computing electronics that allow large amounts of raw radar pulse data to be formed via signal-processing techniques into digital images in real time or near-real time; and (4) the invention of a robust autofocus algorithm, which is a post-processing methodology that can remove the blurring artifacts in the formed SAR image that result from the small residual position errors left by the electronic navigation system.
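The bandwidth figures in item (2) translate directly into spatial resolution: the slant-range resolution of a SAR is c/(2B), where B is the transmitted bandwidth. A short sketch with illustrative bandwidth values shows how several-meter early resolution becomes inch-scale resolution at gigahertz bandwidths:

```python
# Slant-range resolution of a SAR is set by the transmitted bandwidth:
# delta_r = c / (2 * B). A several-gigahertz bandwidth yields inch-scale cells.
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    return C / (2.0 * bandwidth_hz)

for b_ghz in (0.05, 0.5, 2.0):   # illustrative bandwidths
    dr = range_resolution_m(b_ghz * 1e9)
    print(f"B = {b_ghz:4.2f} GHz -> {dr*100:6.1f} cm ({dr/0.0254:6.1f} in)")
```

A 50 MHz system lands at roughly 3 meters, consistent with the "several meters" of the early SARs, while 2 GHz of bandwidth gives about 7.5 cm, consistent with the "several inches" of modern systems.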
Circa 1980, researchers in the SAR arena discovered that another aspect of SAR made it capable of performing certain tasks that were not achievable with optical or IR sensors. This discovery was founded in the fact that SARs are coherent imaging systems, whereas conventional electro-optical and infrared systems are not. As a coherent imager, a SAR transduces not only the amount of microwave energy reflected from Earth’s surface, but also the phase of the reflected energy at any given position in the image. The quantity “phase” encodes, among other things, information regarding the precise position of a radar reflector, e.g., a rock, a piece of dirt, or a blade of grass, that lies within one of the resolution cells (pixels) of the formed SAR image. As a result, any change in the position of a reflector
can be measured to within a fraction of a wavelength (e.g., several millimeters) by computing the phase difference between a pair of SAR images taken of the same scene on Earth's surface but separated in time, typically by hours to days. Known as coherent change detection (CCD), this procedure has for more than a decade been a very effective tool for detecting subtle surface changes, including vehicle tracks and even human footprints. A second, related phase-difference technique, known as interferometric SAR (IFSAR) terrain mapping, can produce digital terrain elevation maps of Earth's surface that are accurate to within inches of elevation when measured on post spacings as small as 1 meter.
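The core computation behind CCD can be sketched numerically: the sample coherence between two co-registered complex images stays near 1 where the scene is unchanged and collapses where the reflectors have been disturbed. The scene below is synthetic, standing in for real SAR pixel data:

```python
# Sample coherence between two co-registered complex SAR looks. Coherence near
# 1 means the scene is unchanged; a dip flags disturbance (tracks, footprints).
import cmath
import random

def coherence(a, b):
    """|sum(a * conj(b))| / sqrt(sum|a|^2 * sum|b|^2) over a local window."""
    num = sum(x * y.conjugate() for x, y in zip(a, b))
    den = (sum(abs(x) ** 2 for x in a) * sum(abs(y) ** 2 for y in b)) ** 0.5
    return abs(num) / den

random.seed(1)
# Synthetic patch of 256 pixels with random reflector phases.
scene = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(256)]
unchanged = list(scene)                      # second pass, same reflectors
disturbed = [cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(256)]

print(coherence(scene, unchanged))   # ~1 (no change)
print(coherence(scene, disturbed))   # near 0 (decorrelated patch)
```

In a real CCD workflow this statistic is computed on a sliding window across the image pair, producing a coherence map in which disturbed ground appears dark.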
Current research in SAR involves attempts to have a computer automatically identify the vehicular targets within a formed SAR image. This set of techniques is commonly known as SAR automatic target recognition (ATR). In addition, serious research and development efforts are aimed at imaging and identifying moving vehicular targets, a capability generally referred to as moving-target indicators (MTIs). Finally, interest continues in SARs that can penetrate foliage (FOPEN SAR) and SARs whose radar transmitter and receiver are not collocated; that is, they are on different platforms (bistatic SAR). SARs will undoubtedly continue to be adapted to meet larger and larger shares of the imaging needs of both military and civilian efforts (Jakowatz et al., 1996).1
Orbital sensors are an important part of the U.S. arsenal. Assets range from low-Earth-orbit to geosynchronous sensing systems, with most of the sensing complement in the low-Earth-orbit category. A wide variety of primarily electromagnetic sensors—from the infrared to the ultraviolet—are deployed, and the U.S. military is highly dependent on their capabilities. Increasingly, commercial enterprises are providing similar capabilities, at least in the visible wavelength range, and anyone can now anonymously order a satellite photograph of his or her—or your—neighborhood at very modest cost. Here as elsewhere, it will be important to continuously assess a potential enemy’s access and sophistication and not assume that the military capabilities are far superior, as they have been for so long. Because of the commercial uses, access to space resources is no longer a state monopoly.
It is also important to consider what kinds of images an adversary might buy on the commercial market—for a modest price, for example, submeter-resolution images can be purchased over the Internet—and what their value might be for a subpeer state or even a nonstate actor such as a terrorist organization. High-resolution imagery as the domain of only peer adversaries is now a thing of the past, and the impact of this change should be carefully evaluated.
As unmanned vehicles have become more capable and affordable, reliance on them for sensing has increased. The obvious benefit is that no human is placed in harm's way; moreover, because UAVs are small, fast, maneuverable, and low-observable, they provide a major advantage. This capability puts a premium on small, low-power-consumption sensor technologies. Just as in the case of satellite-based sensing systems, these sensors have taken good advantage of the continuing shrinkage of integrated-circuit size, power consumption, and weight, as well as advances in micro- and nanotechnologies that are putting ever more power into smaller systems. A clear issue for the long term is that the
United States is no longer the dominant research community in these areas. Both the Pacific Rim and Europe are devoting significant resources to research in robotics and in micro- and nanotechnologies.2
Geolocation Sensor Systems
Global Positioning System. One form of sensing that has become increasingly important is geolocation. The advent of the Global Positioning System (GPS) has made it possible to accurately register the positions of equipment and personnel to tactical imagery or available maps. This capability makes it easier to guide BLUE forces in complex and fluid tactical situations. As reliance on this technology grows, however, the potential for disruption through GPS jamming becomes an increasingly worrisome possibility.
In its current incarnation, the GPS signals broadcast from satellites are relatively weak (each satellite radiates about 500 watts), which makes it possible to jam them with low-power transmitters. The weakness of the signal also means that GPS is not available in urban canyons, inside buildings, or under a jungle canopy. With urban warfare becoming increasingly important, alternatives that augment the capability provided by GPS should be developed. Some solutions, such as GPS pseudolites, are possible and could be deployed on aircraft or aerostats, or even on rooftops if controlled by BLUE forces.
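A back-of-the-envelope free-space link budget shows why low-power jammers are effective. The jammer power and distances below are illustrative assumptions, not measured figures; the satellite EIRP and orbital altitude are the round numbers cited above.

```python
# Free-space link budget comparing the GPS signal at the ground with a modest
# jammer. EIRP figures and distances are illustrative assumptions.
import math

L1_WAVELENGTH_M = 0.1903          # GPS L1 carrier, ~1575.42 MHz

def received_dbw(eirp_w: float, distance_m: float) -> float:
    """EIRP minus free-space path loss, assuming an isotropic receive antenna."""
    fspl_db = 20 * math.log10(4 * math.pi * distance_m / L1_WAVELENGTH_M)
    return 10 * math.log10(eirp_w) - fspl_db

gps = received_dbw(500.0, 20_200_000.0)   # satellite at ~20,200 km altitude
jam = received_dbw(1.0, 10_000.0)         # assumed 1 W jammer at 10 km

print(f"GPS signal: {gps:6.1f} dBW")
print(f"Jammer:     {jam:6.1f} dBW  ({jam - gps:.1f} dB stronger)")
```

Under these assumptions a 1-watt jammer 10 kilometers away arrives tens of decibels stronger than the satellite signal, which is why receivers depend on spread-spectrum processing gain and antenna design to reject interference.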
Another issue is that this capability is now widely available commercially. How much does this fact detract from the U.S. battlefield advantage? Why have Europe (France in particular) and China partnered to provide an alternative to GPS? If the operators refuse to deny availability of Galileo signals to an adversary, what options does the United States have?
Inertial Navigation. Microelectromechanical systems (MEMS)-based accelerometers make possible short-term inertial navigation that could be used to fill the gaps in information when GPS is not available.
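The reason such inertial fill-in is inherently short-term can be seen by double-integrating a small uncorrected accelerometer bias; the bias value below is an assumed, roughly milli-g figure typical of the error budget discussion, not a measured MEMS specification.

```python
# Dead-reckoning with a MEMS accelerometer: double-integrating a small constant
# bias shows why inertial fill-in works for seconds to minutes, not hours.
def drift_m(bias_ms2: float, seconds: float) -> float:
    """Position error from an uncorrected accelerometer bias: 0.5 * b * t^2."""
    return 0.5 * bias_ms2 * seconds ** 2

BIAS = 0.01   # assumed bias, m/s^2 (about 1 milli-g)
for t in (10, 60, 600):
    print(f"after {t:4d} s: {drift_m(BIAS, t):8.1f} m of drift")
```

The quadratic growth (half a meter after 10 seconds, tens of meters after a minute, kilometers after ten minutes) is what limits MEMS inertial navigation to bridging short GPS outages.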
Commercial GPS Navigation. Commercial GPS-like navigation systems, which use television signals and are intended for urban environments, will be available in the near term. Some of these systems will offer a significant increase in bandwidth capability compared with traditional GPS systems, stronger synchronization codes, and much lower operating frequencies that would allow for penetration of urban dwellings.3
Ultrawide Band Transduction. Recent work on using ultrawide band (UWB) transducers for precise location has appeared in the open literature. The approach involves emplacement of low-power UWB sources in, say, a building and then using time-of-flight computation to determine precise location.
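The time-of-flight geometry reduces to classical trilateration. A minimal two-dimensional sketch follows, with hypothetical beacon positions; the linearization subtracts the first range equation from the others, leaving a small linear system.

```python
# Trilateration from time-of-flight ranges to three fixed UWB beacons (2-D).
# Beacon positions and ranges are hypothetical illustrations.
import math

def locate(anchors, dists):
    """Solve for (x, y) from ranges to three known anchors.

    Subtracting the first range equation from the others cancels the x^2 + y^2
    terms, leaving a 2x2 linear system solved here by Cramer's rule."""
    (x0, y0), d0 = anchors[0], dists[0]
    a, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        a.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append((xi**2 - x0**2) + (yi**2 - y0**2) + (d0**2 - di**2))
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]   # hypothetical beacons, meters
truth = (3.0, 4.0)
dists = [math.dist(p, truth) for p in anchors]     # ideal time-of-flight ranges
print(locate(anchors, dists))   # recovers approximately (3.0, 4.0)
```

With real UWB hardware the ranges carry timing noise, so more than three beacons and a least-squares solve are used; the geometry, however, is the same.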
Tidal Forces. There are currently many solutions to tracking objects that move at high speed, such as fixed-wing aircraft, through the use of GPS and/or inertial measurement units (IMUs). Unfortunately, tracking the movement of slower objects remains a problem in situations where GPS is not available.
Recent work has focused on the use of variations in the local gravitational field. Simulations have been developed using closed-form solutions that analyze these geophysical signals, add appropriate noise levels, and then compute a location based on an iterative technique. These simulations are being used to determine the boundary conditions under which geolocation can be performed and to ascertain the types of sensing and signal-processing technologies required to implement a fieldable microsystem (Novak et al., 2005).

See, for example, http://www.nano.gov/html/res/IntStratDevRoco.htm. Last accessed on April 8, 2005.

See, for example, http://www.rosum.com/rosum_tv-gps_indoor_location_technology.html. Last accessed on February 11, 2005.
Networked Point Sensors
There is increasing interest in the emergent properties of networked point sensors—extending from large systems (e.g., satellite antenna arrays) to small expendable self-contained microsystems or “smart dust”—as an approach to sensing with improved performance and often with enhanced robustness. Commercial products such as SmartMeshTM, a low-power wireless mesh sensor network, are already in use.4 The vulnerabilities of these systems vary considerably, but clearly the network management aspects are very difficult and are inherently subject to intercept, jamming, and deception strategies.
Since most electronics manufacturing occurs outside the continental United States, it is likely that the United States will face adversaries with significant sensor networking capabilities soon after the technologies are developed. There are no esoteric materials or techniques involved, and, once committed to silicon, the sensors can be replicated inexpensively in the millions. Since software engineering expertise is not confined to the United States, once the algorithms are developed and published in the open literature it can be expected that they will be implemented both in commercial products and by the military establishments of U.S. adversaries.
The terahertz frequency range (once called the submillimeter spectral region) has seen a recent increase in research activity largely because of the potential ability of terahertz sensors to “see through walls and under clothes.” (Such a capability raises many new issues such as those related to privacy in a law enforcement context.) The largest issue is the lack of a suitable set of sources of sufficient power and detectors of sufficient sensitivity. At present much of this work is in the research stage and is available primarily to state actors. Understanding of the interaction of terahertz waves with materials, for example, spectroscopy of complex biomolecules, is at a very primitive stage, with interesting and possibly important suggestions of an enhanced capability for discrimination of signals, but without either a strong theoretical or an extensive experimental foundation.
The infrared is a very information-rich spectral region. The rotational-vibrational spectra of all molecular species lie in the infrared. Further, the peak of room-temperature blackbody radiation is at about 10 micrometers in the IR, so that all of these species radiatively exchange energy with the environment. Monitoring this radiation provides a critical sensing modality. Night-vision imaging sensors are sensitive over a range of near-IR wavelengths and enable troops to operate in nighttime and dark environments. These detectors operate by sensing differences in temperature/emissivity from different surfaces—warm people and machines versus cold plants, and so on. Although the advanced sensors of the U.S. military have led to the concept that "the U.S. owns the night," the danger is that these sensors are entering the commercial sector for a variety of legitimate applications and are thus becoming widely available to friend and foe alike. It will be important to monitor the availability of night-vision equipment to likely adversaries, both state and nonstate, so as to avoid relying on a technological superiority that no longer exists.

See, for example, http://www.dust-inc.com. Last accessed on February 10, 2005.
The visible spectrum is the range of the greatest atmospheric transmission and is also the region of the spectrum for which sensing equipment is the most advanced and capable. Myriad sensor systems cover this range, and much work has been done in signal processing to extract information from optical images. As sensing equipment is increasingly silicon-dominated and networked processing-intensive, one of the issues to be concerned with is the trustworthiness of the software and hardware, especially as the sources of both move offshore, often to less politically reliable areas.
Acoustic and Seismic Sensors
Acoustic and seismic sensors have proven useful in a variety of military applications. These sensors can serve as simple threshold detectors—that is, to detect signatures that exceed specified limits—or they can be utilized to identify classes of targets. The acoustic signatures of military targets are fairly diverse, and good performance in identifying them is achievable for time-critical targets. Acoustic arrays can provide a bearing to target. Even with a limited number of sensors, properly deployed seismic arrays can provide information on direction of travel. The primary role of these sensors is as a force multiplier. A major challenge is efficiently and effectively analyzing the vast amounts of data generated by even a moderate number of sensors.5,6
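A two-microphone bearing estimate of the kind used in gunfire-localization arrays can be sketched as follows. The delay between channels is found by cross-correlation, and the array geometry converts delay to bearing via sin(theta) = c * dt / d for a baseline of length d. The sample rate, baseline, and synthetic impulse below are illustrative assumptions.

```python
# Bearing to an impulsive acoustic source (e.g., a muzzle blast) from a
# two-microphone array via time difference of arrival (TDOA).
import math

SPEED_OF_SOUND = 343.0   # m/s, nominal
BASELINE_M = 1.0         # assumed microphone spacing
FS = 48_000              # assumed sample rate, Hz

def estimate_delay(ref, sig, max_lag):
    """Integer-sample lag that maximizes the cross-correlation of two channels."""
    best, best_lag = float("-inf"), 0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(ref[i] * sig[i + lag]
                for i in range(max(0, -lag), min(len(ref), len(sig) - lag)))
        if s > best:
            best, best_lag = s, lag
    return best_lag

# Synthetic impulse arriving 100 samples later at the second microphone.
n = 2048
mic1 = [0.0] * n; mic1[500] = 1.0
mic2 = [0.0] * n; mic2[600] = 1.0

lag = estimate_delay(mic1, mic2, 200)
dt = lag / FS
theta = math.degrees(math.asin(SPEED_OF_SOUND * dt / BASELINE_M))
print(f"lag = {lag} samples, bearing = {theta:.1f} deg off broadside")
```

A single pair yields only a bearing; as noted above, multiple properly deployed sensors are needed to localize a source and estimate direction of travel.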
Increasing use is being made of spectral imaging of a spatially resolved target. Multispectral imaging and hyperspectral imaging, for example, are unquestionably powerful techniques, but they produce vast quantities of data. One of the major challenges is to synthesize/compile all of the data obtained to provide actionable information. Another is to decide how much of the data to transmit over available communication lines. The increasing emphasis on local data-processing operations to reduce the load on the communications links can be either beneficial or harmful to the ability to maintain information dominance, depending on the situation. The optimal system should be automatically reconfigurable in response to external factors that can change rapidly. This major challenge brings up all of the issues of surety of hardware and software and network reliability discussed above for other sensor modalities.
Chemical sensors can be classified as remote and point sensors. Remote chemical sensors are based largely on spectroscopy, with transitions extending from the near-ultraviolet to the terahertz region. The infrared spectral region is the most information rich for most molecular species. For many detection schemes, false-alarm discrimination is a major issue. The battlefield in particular is a chemical-rich environment, and it is important to be able to distinguish between chemical agents and other species. Nerve agents, for example, are very large molecules with complex rotational-vibrational spectra with many overlapping bands giving a broad resonance. The phosphorus-oxygen stretch at 10 micrometers is an important signature of many nerve agents. Ammonia, which is commonly found in the battlefield environment, is a small molecule with sharply defined rotation-vibration lines. A high-spectral-resolution, broadly tunable sensor is needed to achieve the necessary discrimination. Many point sensors operate by changing conductivity in the presence of chemical agents. Because many things can cause conductivity to change, adequate discrimination and false alarms are major issues, and it is necessary to have good controls and redundancy to ensure reliable readings.
Sensing of ionizing radiation is a very mature activity, reaching back more than half a century. Unstable nuclei decay via beta, alpha, or positron emission, electron capture, or fission to attain a stable nucleus. The half-lives for these processes differ by many orders of magnitude, as do the levels of radiation emitted. These energy emissions generally interact with surrounding media and are therefore attenuated as they travel from the emitting source to the detector system. Thus, radiation detection, although possible at large distances in some cases, is normally described as point detection. Sometimes the decay particle can be measured directly, but it is often more convenient to detect and quantify the gamma rays emitted during these processes. Most radiation detection systems used for search and other fieldwork rely on the detection of gamma rays and neutrons. Most radionuclides can be identified via detection of their unique gamma-ray signatures (Knoll, 1979).
The ability to identify isotopes based on gamma-ray spectra is greatly facilitated by using detectors with high energy resolution, which unambiguously identify an isotope by its characteristic emitted energies, visible as lines at specific energies in an energy spectrum. In practice, however, there are tradeoffs. Currently, and for the foreseeable future, the gamma-ray detectors with the best resolution are also the most expensive, and they exhibit other characteristics that constrain their use in many field applications. The larger and cheaper detectors exhibit poorer energy resolution. Under some conditions, especially where measurements can be taken for long periods of time (several minutes to hours) and/or in close proximity (approximately a meter or less, depending on source strength and detector size), miniaturized detectors are adequate or have a niche. This is because source strength and source shielding often dictate that less than 1 million gammas per second in 4-pi geometry are available for measurement, and sometimes one to two orders of magnitude less. For small detectors with a frontal area on the order of 1 to 10 cm2, this means that at distances greater than 1 meter, 10 or fewer gamma rays may intersect the detector in a second of measurement time.
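The arithmetic behind these count-rate figures is simple solid-angle geometry: the expected hit rate is the source emission rate times the fraction A / (4 * pi * r^2) subtended by the detector face. The sketch below uses the numbers quoted above.

```python
# Expected gamma-ray hits on a small detector face from an unshielded point
# source: emission rate times the solid-angle fraction A / (4 * pi * r^2).
import math

def counts_per_second(emission_rate, frontal_area_cm2, distance_m):
    """Geometric hit rate, ignoring attenuation and detector efficiency."""
    area_m2 = frontal_area_cm2 * 1e-4
    fraction = area_m2 / (4 * math.pi * distance_m ** 2)
    return emission_rate * fraction

# Source strengths from 1e6 gammas/s in 4-pi down two orders of magnitude,
# viewed by a 10 cm^2 detector face at 1 meter.
for rate in (1e6, 1e5, 1e4):
    print(f"{rate:9.0f} gammas/s -> {counts_per_second(rate, 10.0, 1.0):7.2f} hits/s")
```

With the weaker sources, or with a 1 cm2 face, the geometric rate indeed falls to the "10 or fewer gamma rays per second" cited above, before attenuation and interaction probability reduce it further.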
In many real-world applications, such as at traffic choke points or other points of entry into the United States, measurements must be made in only a few seconds in order not to disrupt commerce unduly. Thus, during such a measurement with small detectors, often the total number of gamma rays striking the detector may be a few hundred, or less. No matter what the detector type (material), for detectors of a reasonable thickness, the probability of interaction will be less than unity. In addition,
many gamma rays will not deposit their full energy within the detector volume but will scatter and escape it, so that their energy is recorded below the full-energy peak. Although these scattered gamma rays are useful, they are not specific to the isotope being measured; they present a rather broad, featureless continuum. Thus, in order to determine the presence of a specific isotope from its full-energy lines, several hundred or more gamma rays must interact with the detector, in addition to those arising from the radioactive isotopes always present in the environment (background).
The shielding and scattering effects are also energy and material dependent, making the resulting spectrum somewhat complex. Thus, in many applications larger, rather than smaller, detector volumes are required because of the physics, and this limitation cannot be fully overcome by choosing different detector types or materials. In practice, plastic scintillators are the largest and cheapest of the usable gamma spectrometers. However, the energy resolution of these low-atomic-number organic materials is so poor that reasonable isotope identification through normal means is almost impossible; that is, their use as spectrometers is quite limited.
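The energy and material dependence of shielding follows simple exponential attenuation, I/I0 = exp(-mu x). The lead coefficients below are approximate values used only for illustration; a real calculation would use tabulated data (e.g., the NIST XCOM database).

```python
import math

# Approximate mass attenuation coefficients for lead (cm^2/g),
# keyed by photon energy in keV. Illustrative values only.
MU_RHO_LEAD = {100: 5.55, 500: 0.161, 1000: 0.0710}
RHO_LEAD = 11.35  # density of lead, g/cm^3

def transmitted_fraction(energy_kev, thickness_cm):
    """Fraction of gammas passing through lead without interacting:
    I/I0 = exp(-mu * x), where mu = (mu/rho) * rho."""
    mu = MU_RHO_LEAD[energy_kev] * RHO_LEAD
    return math.exp(-mu * thickness_cm)

# One centimeter of lead stops essentially all 100-keV gammas but
# passes nearly half of the 1000-keV gammas, so shielding "hardens"
# the transmitted spectrum toward higher energies:
print(transmitted_fraction(100, 1.0))
print(transmitted_fraction(1000, 1.0))
```

This strong energy dependence is what makes the shape of a measured spectrum informative about the shielding present, a point developed further below.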
The next general class of detectors in use is inorganic scintillators, such as NaI and CsI. These detectors cost more per unit volume than plastic and are usually smaller, but their energy resolution and detection sensitivity are significantly better, and they can usually be supplied with enough volume and area that they are the detectors most commonly used in fielded systems providing isotope identification.
Another category of detector is based on room-temperature semiconductors, such as cadmium-zinc-telluride (CZT), and offers an energy resolution typically three to four times better than that of an NaI detector. This technology is still under development: spectral-grade detectors remain difficult to obtain, are quite expensive for the volume obtained, and vary in detection characteristics from detector to detector (a quality-control issue). Their main drawback, however, is the small volumes obtainable, on the order of a few cubic centimeters at best.
A great deal of effort has been expended in the last decade to grow larger, better crystals and to use other techniques, such as combining several detector elements into arrays and applying pulse-shape discrimination, to achieve better resolution or larger effective sizes. But such technologies are still inadequate for most normal field applications. High-purity germanium (HPGe) semiconductor detectors have the best energy resolution available for fieldable gamma-ray detection: better than CZT by a factor of 20 to 30 and better than NaI by a factor of 50 to 100. However, they must be cryogenically cooled to liquid-nitrogen temperatures, requiring either large dewars or more electrical power for thermo-mechanical cooling than a battery-operated system can typically supply. In addition, the largest sizes are about 1 liter in volume, much larger than CZT but still much smaller than what is available in NaI or plastic. They are also roughly 10 times as expensive as CZT and 20 to 50 times as expensive as NaI. In all cases, costs vary with volume and detector quality.
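These resolution differences can be turned into a rough resolvability check: two gamma lines are distinguishable only if their separation exceeds the detector's FWHM at that energy. The percentage resolutions below are typical illustrative figures (roughly at 662 keV), not specifications, and real resolution also varies with energy.

```python
# Illustrative FWHM energy resolutions (percent); actual figures
# vary with detector size, quality, energy, and electronics.
RESOLUTION_PCT = {"NaI": 7.0, "CZT": 2.0, "HPGe": 0.2}

def resolvable(e1_kev, e2_kev, detector):
    """Rough criterion: two lines are distinguishable if their
    separation exceeds the detector's FWHM near those energies."""
    fwhm = RESOLUTION_PCT[detector] / 100.0 * ((e1_kev + e2_kev) / 2.0)
    return abs(e1_kev - e2_kev) > fwhm

# Example: separating Co-60's 1173.2-keV line from the 1120.3-keV
# line of Bi-214, a common environmental background isotope:
for det in ("NaI", "CZT", "HPGe"):
    print(det, resolvable(1120.3, 1173.2, det))
```

Under these illustrative numbers, NaI merges the two lines into one broad peak while CZT and HPGe separate them, which is the practical meaning of the resolution factors quoted above.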
To overcome the limitations and difficulties of spectral unfolding with poorer-resolution detectors, software processing techniques have been under development. In most cases these techniques, which can also be applied to high-resolution systems, make it possible to provide at least a primary isotope detection and identification. With these techniques and detectors it is possible to separate medical isotopes from industrial isotopes and from special nuclear material, and also to make some determination of the amount of shielding present. One can thus determine whether a medical isotope of reasonable activity and little shielding, such as an isotope in a patient's body, is being used in a legitimate application, or whether a similar source, orders of magnitude more intense, is being hidden within heavy shielding (one case of a radiological dispersal device). Such a determination is based on the shape of the measured spectrum, which reflects the energy dependence of absorption and scattering in the materials present, as described above.
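The identification step these software techniques perform can be caricatured as matching measured full-energy peak positions against a library of known lines, within an energy tolerance set by the detector's resolution. The tiny library and tolerance below are purely illustrative; fielded systems use far larger libraries along with branching ratios, continuum fitting, and shielding models.

```python
# Tiny illustrative line library (energies in keV).
LINE_LIBRARY = {
    "Cs-137 (medical/industrial)": [661.7],
    "Co-60 (industrial)": [1173.2, 1332.5],
    "Tc-99m (medical)": [140.5],
    "U-235 (SNM)": [185.7],
}

def identify(peaks_kev, tolerance_kev=3.0):
    """Return candidate isotopes whose library lines all appear
    among the measured peaks, within the energy tolerance."""
    matches = []
    for isotope, lines in LINE_LIBRARY.items():
        if all(any(abs(p - line) <= tolerance_kev for p in peaks_kev)
               for line in lines):
            matches.append(isotope)
    return matches

# Peaks found at 140.2, 1173.5, and 1332.1 keV:
print(identify([140.2, 1173.5, 1332.1]))
# → ['Co-60 (industrial)', 'Tc-99m (medical)']
```

In practice the tolerance would scale with the detector's FWHM at each energy, which is why poorer-resolution detectors yield more ambiguous candidate lists.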
There are circumstances in which the effects of shielding, or the need to determine the specific isotopes of an elemental source, require the use of a high-purity germanium detector. Some low-level emitters, such as enriched uranium, can be detected passively through other, normally associated isotopes, such as U-238 or U-232. The gamma rays emitted by U-235 itself are low in energy and easily shielded, or so weak in intensity as to be difficult to detect under most field constraints; thus enriched uranium, when detected passively, is usually found through detection and identification of gamma rays from these associated isotopes, if present. Otherwise, active interrogation methods can be employed, typically using neutrons or high-energy gamma rays as the interrogating medium, to cause the U-235 in the sample to fission and thus emit neutrons and other measurable gamma rays. However, such interrogating systems tend to be rather large and expensive and present radiation safety issues. In short, several options are currently available for making radioisotopic measurements. The specific system used must be optimized for the specific situation and its associated concept of operations. Ongoing research and development aims to yield new materials and techniques that should provide some relief in some scenarios; this development should be encouraged and funded. However, much can already be done by designing appropriate systems around currently available detectors.7,8
Jakowatz, Jr., Charles V. et al. 1996. Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach. Kluwer Academic Publishers, Boston.
Knoll, Glenn F. 1979. Radiation Detection and Measurement. Wiley, New York.
NIST (National Institute of Standards and Technology). 2000. Report on the Development of the Advanced Encryption Standard (AES), U.S. Department of Commerce. Available online at http://www.linuxsecurity.com/resource_files/cryptography/r2report.pdf. Last accessed on February 11, 2005.
Novak, Jim L., Michael R. Daily, and Steven B. Rohde. 2005. Geophysical Geolocation System. Sandia National Laboratories, Albuquerque, N.Mex.
NRC (National Research Council). 1991. Computers at Risk: Safe Computing in the Information Age. National Academy Press, Washington, D.C.
NRC. 1996a. Cryptography’s Role in Securing the Information Society. National Academy Press, Washington, D.C.
NRC. 1996b. Continued Review of the Tax Systems Modernization of the Internal Revenue Service: Final Report. National Academy Press, Washington, D.C.
NRC. 1997. For the Record: Protecting Electronic Health Information. National Academy Press, Washington, D.C.
NRC. 1998. Trust in Cyberspace. National Academy Press, Washington, D.C.
NRC. 1999. Realizing the Potential of C4I: Fundamental Challenges. National Academy Press, Washington, D.C.
NRC. 2001. Embedded, Everywhere: A Research Agenda for Networked Systems of Embedded Computers. National Academy Press, Washington, D.C.
NRC. 2002. Cybersecurity Today and Tomorrow: Pay Now or Pay Later. The National Academies Press, Washington, D.C.