4
Emerging Technologies with Potentially Significant Impacts

INTRODUCTION

The ultimate performance of a particular detector system is dependent on the integration of the various component technologies. Chapter 3 discusses the current and anticipated 10-15 year status of the various detector component technologies together with their likely impact on overall system performance. In contrast, this chapter focuses on technology breakthroughs that are more speculative in nature but, if achieved, could represent “game changing” improvements in system-level detector performance. Technologies enabling (1) advanced detection, (2) innovative optics, (3) improved coolers, and (4) enhanced signal processing are discussed in detail.

ADVANCED DETECTION TECHNOLOGIES

Epitaxial Growth Approaches

Epitaxial growth techniques are used to produce the active material in most long-wavelength infrared (LWIR) photon detectors. The detector material generally consists of two or more thin layers grown in succession on a substrate. Epitaxial growth implies that the crystal structure of the layers is aligned with that of the substrate, a necessary requirement for good material quality and appropriate electrical characteristics. The two most common families of epitaxial materials for LWIR applications are mercury cadmium telluride (MCT) and antimonide-based III-Vs.



For MCT, the technique of liquid-phase epitaxy (LPE) was demonstrated in the early 1980s and has matured to become a workhorse of the industry. Elements to form the layers are first dissolved in a melt of mercury or tellurium. The substrate is immersed in the melt and the temperature is ramped down, causing the elements to crystallize and form a layer. A second melt is used to form a second layer. N-type and p-type dopants, respectively, are included in the melts, so that the interface between layers becomes a p-n junction. The substrate for nearly all LPE growth is cadmium zinc telluride (CZT), which is chemically and physically compatible with MCT and is transparent in the IR. Advantages of LPE are that (1) it occurs close to thermodynamic equilibrium, typically near 500°C, causing it to be relatively forgiving of defects; (2) dopants can be incorporated in a very controllable manner; and (3) excellent material quality is routinely achieved. The disadvantages are that detector structures requiring more than two layers are impractical, and it is not possible to maintain sharp interfaces between layers because of interdiffusion during growth. Also, LPE growth cannot be performed on alternative substrates such as silicon.

Molecular beam epitaxy (MBE) has become the preferred growth method for more advanced MCT device structures, such as two-color arrays for third-generation sensors, as well as avalanche photodiodes. It also enables the growth of MCT on silicon and GaAs substrates that are larger and cheaper than CZT. MBE growth is performed in an ultrahigh-vacuum chamber with the elements being emitted by hot effusion cells and depositing on the substrate, which is held at about 200°C. Sharp interfaces can be formed because the molecular beams can be turned on and off abruptly and because interdiffusion is negligible at the low growth temperature. In most cases the substrate is CZT, silicon, or GaAs.
MBE is more challenging than LPE because it is less tolerant of growth defects, and it requires very tight control of the substrate temperature and the beam pressures of the species arriving at the substrate. MBE equipment is also more expensive to acquire and maintain than LPE equipment. However, MBE technology has matured to the point that multilayer epitaxial structures, in which the MCT alloy composition and the doping are controllably changed several times during the growth run, are produced on a regular basis. This has enabled complex device structures in MCT that would otherwise not have been possible. Also, the ability to grow MCT on silicon has enabled the fabrication of very large focal plane arrays (FPAs). There is a large lattice mismatch between HgCdTe and both GaAs and silicon, which has severe implications for the growth process, in particular the formation of dislocations and other growth defects. Significant progress has been made in learning how to accommodate the lattice mismatch, particularly in the HgCdTe:GaAs system, as discussed in Chapter 3. The availability of larger and cheaper substrates for epitaxial growth will have a major impact on the performance and cost of future HgCdTe FPAs.
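The severity of the lattice mismatch mentioned above can be quantified with a few lines of arithmetic. The lattice constants below are standard textbook values, not figures from this report, and CdTe is used as a stand-in for the LWIR HgCdTe alloy, whose lattice constant differs from CdTe's by only a fraction of a percent.

```python
# Approximate room-temperature lattice constants in angstroms (textbook
# values quoted for illustration only; CdTe stands in for LWIR HgCdTe).
a_cdte = 6.481   # CdTe, close to CdZnTe and to LWIR HgCdTe alloys
a_gaas = 5.653   # GaAs
a_si   = 5.431   # Si

def mismatch(a_layer, a_sub):
    """Fractional lattice mismatch of an epitaxial layer on a substrate."""
    return (a_layer - a_sub) / a_sub

for name, a_sub in [("GaAs", a_gaas), ("Si", a_si)]:
    print(f"HgCdTe on {name}: ~{100 * mismatch(a_cdte, a_sub):.0f}% mismatch")
```

Mismatches on the order of 15 to 19 percent result, versus essentially zero for growth on CdZnTe, which is why dislocation management dominates the HgCdTe-on-Si and HgCdTe-on-GaAs growth problem.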

The antimonide-based III-V materials, including strain-layer superlattice (SLS), quantum-well infrared photodetector (QWIP), and quantum-dot infrared photodetector (QDIP) structures, are grown by MBE in most cases. The technique is similar to that for MCT, except that the substrate temperature is higher (about 400 to 500°C). MBE technology for III-V materials is relatively mature, but some additional development has been required to control the composition of the atomic monolayers at the interfaces between the indium arsenide and the gallium antimonide components of the SLS in order to control the strain. Further improvements in the MBE technique are needed to minimize the populations of point defects that limit the carrier lifetimes in SLS material. The large experience base in the growth and design of electronic and photonic devices using bandgap engineering—the incorporation of multiple functional layers into device structures—opens new avenues for optimizing device performance. Examples include barrier layers to reduce dark current and amplification layers to extend the concepts of avalanche gain and Geiger mode detection further into the IR. Most of these efforts are at an early stage of research and development and are likely to bear significant fruit within the next 15 years. This area bears watching for further improvements in infrared detection.

Nanophotonics

Over the past few decades, the global scientific community has gradually developed techniques for structuring materials at the nanometer scale, giving rise to the field of nanotechnology. Arguably, nanotechnology has its roots in the challenge issued by Professor Richard Feynman in 1959 to build the world's smallest motor.1 In 1996, a federal interagency working group was formed to consider the creation of a National Nanotechnology Initiative (NNI) to focus U.S. research and development (R&D) efforts,2 and in 2000 the NNI became a formal government program. In 2003, the 21st Century Nanotechnology Research and Development Act (NRDA) gave the NNI the legislative backing needed to establish a management structure and funding.3 The National Science Foundation has played a key role, leading and coordinating the various agencies involved, including the Department of Defense, the Department of Energy, the Environmental Protection Agency, the National Institutes of Health, the Department of Commerce, the U.S. Department of Agriculture, the National Aeronautics and Space Administration, and many

1 R.P. Feynman. 2002. The Pleasure of Finding Things Out and the Meaning of It All. New York: Perseus.
2 National Research Council. 2002. Small Wonders, Endless Frontiers: A Review of the National Nanotechnology Initiative. Washington, D.C.: The National Academies Press.
3 National Research Council. 2006. A Matter of Size: Triennial Review of the National Nanotechnology Initiative. Washington, D.C.: The National Academies Press.

others. In the past several years, the annual funding for the NNI has been at about $1.5 billion.4 Table 4-1 shows the distribution among the various funding sources within the U.S. government. A similar historical evolution has occurred in many foreign countries.5 The intent in this chapter is not to provide exhaustive historical detail; in short, Japan, Europe, and other Asian countries have kept pace with the United States, with comparable government expenditures and even more significant commercial funding. Much of the work up to the past half dozen years has been oriented toward fundamental materials, establishing methods for creating nanometer-scale structures and measuring their properties. More recently, efforts have matured toward building entirely new device structures using these material fabrication techniques. Quantum dots, nanotubes, and layered carbon graphene structures can be cited as examples of specific materials from which entirely new devices and applications will arise. Countries around the world are poised to take advantage of nanotechnology to potentially build entirely new sensors and sensor systems. Therefore, foreign progress in the nanotechnology field constitutes a principal driver for significant advances in sensors.

Of the many nanoscale material systems being explored, simple carbon structures are undoubtedly the most studied; perhaps these carbon structures will, in fact, be the “silicon” of nanotechnology. With several different morphologies (graphene two-dimensional [2-D] or three-dimensional [3-D] layered structures, spherical fullerenes [“Bucky Balls”], and single- and multiwall carbon nanotubes [CNTs] of different chiralities) and outstanding physical properties (>30× the strength of steel and conductivity approaching that of copper), carbon is being studied for a host of applications.
All of the III-V and II-VI compound classes are also being extensively explored for basic science attributes and applications; hence, techniques have emerged for producing reproducible, high-quality device prototypes using the full power of modern electronic material and chip fabrication methods. It would distract us in this short introduction to list the many other types of materials, such as biology-based building blocks, that might have some bearing on new techniques for sensors; instead it suits our purpose here to focus on the most significant potential that nanomaterials can provide to advance the sensor state of the art. The impact of nanotechnology on future designs of sensors and sensor systems can be anticipated along the following lines.

Graphene is a material that has recently been a subject of intense study for its potential application in high-frequency electronics and photonics. Graphene consists of a monolayer of carbon atoms arranged in a 2-D hexagonal lattice.

4 Available at www.nano.gov/NNI_FY09_budget_summary.pdf. Accessed March 25, 2010.
5 Woodhouse, E.J., ed. 2004. Special Issue on Nanotechnology. Prepared as a publication of the IEEE Society on Social Implications of Technology. IEEE Technology and Society Magazine 23(4).

TABLE 4-1 Funding of Nanophotonics by Federal Agency ($ millions)

Agency       2009 Actual   2009 Recovery(a)   2010 Estimated   2011 Proposed
DOE(b)             332.6              293.2            372.9           423.9
NSF                408.6              101.2            417.7           401.3
HHS-NIH            342.8               73.4            360.6           382.4
DOD(c)             459.0                0.0            436.4           348.5
DOC-NIST            93.4               43.4            114.4           108.0
EPA                 11.6                0.0             17.7            20.0
HHS-NIOSH            6.7                0.0              9.5            16.5
NASA                13.7                0.0             13.7            15.8
HHS-FDA              6.5                0.0              7.3            15.0
DHS                  9.1                0.0             11.7            11.7
USDA-NIFA            9.9                0.0             10.4             8.9
USDA-FS              5.4                0.0              5.4             5.4
CPSC                 0.2                0.0              0.2             2.2
DOT-FHWA             0.9                0.0              3.2             2.0
DOJ                  1.2                0.0              0.0             0.0
TOTAL(d)         1,701.5              511.3          1,781.1         1,761.6

NOTE: CPSC = Consumer Product Safety Commission; DHS = Department of Homeland Security; DOC = Department of Commerce; DOD = Department of Defense; DOE = Department of Energy; DOJ = Department of Justice; DOT = Department of Transportation; EPA = Environmental Protection Agency; FDA = Food and Drug Administration; FHWA = Federal Highway Administration; HHS = Department of Health and Human Services; NASA = National Aeronautics and Space Administration; NIFA = National Institute of Food and Agriculture; NIH = National Institutes of Health; NIOSH = National Institute for Occupational Safety and Health; NIST = National Institute of Standards and Technology; NSF = National Science Foundation; USDA = Department of Agriculture.
(a) Based on allocations of the American Recovery and Reinvestment Act (ARRA) of 2009 (P.L. 111-5) appropriations. Agencies may report additional ARRA funding for small business innovative research (SBIR) and small business technology transfer (STTR) projects later, when 2009 SBIR-STTR data become available.
(b) Includes the Office of Science, the Office of Energy Efficiency and Renewable Energy, the Office of Fossil Energy, the Office of Nuclear Energy, and the Advanced Research Projects Agency-Energy.
(c) The 2009 and 2010 DOD figures include congressionally directed funding that is outside the NNI plan ($117 million for 2009).
(d) Totals may not add, due to rounding.
SOURCE: Data from http://www.nano.gov/html/about/funding.html. Accessed May 2, 2010.

Electronically, it behaves as a zero-bandgap semiconductor with extraordinary carrier mobility, even at room temperature. Graphene has also demonstrated strong photocurrent responses near graphene-metal interfaces. The combination of graphene's attractive electronic and photonic properties holds great promise for visible detector applications. In fact, recent results have demonstrated the use of graphene detectors in a 10 gigabit per second optical link with an external photoresponsivity of 6.1 mA/W at 1.55 µm wavelength.6

6 T. Mueller, F. Xia, and P. Avouris. 2010. Graphene photodetectors for high-speed optical communications. Nature Photonics 4:297-301.

The same group also reports

having demonstrated a strong photoresponse in a metal-graphene-metal (MGM) photodetector at 514 nm, 633 nm, and 2.4 µm. Graphene's high switching speed combined with a broadband photoresponse underscores its potential to have a disruptive impact on future detector performance. The promise of a material that may surpass the performance of silicon for many electronic applications has focused a significant body of research on graphene, although the mechanisms of transport in this material are not yet fully understood. As these research efforts mature over the next several years and new techniques for processing and fabricating graphene devices are developed, the true potential of graphene in electronic and photonic devices will become better clarified and quantified.

Photonic Structures

Nanostructures can be built through bottom-up self-assembly processes, taking advantage of both organic and inorganic routes, and through top-down approaches applying lithographic techniques. Integrated circuit scales are approaching transverse dimensions of ~10 nm,7 and an important trend is the merging of top-down processes, which offer long-range order and complex hierarchical structure, with bottom-up self-assembly, which offers capabilities at the nanometer scale and below, in “directed self-assembly.”8 Since the first demonstration9 of a photonic crystal in 1989, detailed work has accelerated quickly and been extended from microwave to optical frequencies. Again, not to dwell on explanations that can be found in textbooks,10 the concepts use the precision attendant to nanostructure construction to form periodic one-, two-, and three-dimensional subwavelength structures for controlling optical radiation. Analogous to electrons in semiconductors, light propagates through these periodic structures with pass bands or stop bands depending on the wavelength.
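The pass-band/stop-band behavior just described can be sketched with a one-dimensional characteristic-matrix (transfer-matrix) calculation for a quarter-wave Bragg stack, the simplest periodic photonic structure. This is an illustrative sketch, not taken from this report; the refractive indices, layer count, and substrate index are arbitrary assumed values.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of one dielectric layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam  # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(layers, lam, n_in=1.0, n_sub=1.52):
    """layers: list of (index, thickness) pairs; returns power reflectance."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

lam0 = 10.0          # design wavelength (arbitrary units, e.g. um for LWIR)
n_hi, n_lo = 2.3, 1.38
pair = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))]
stack = pair * 6     # six high/low quarter-wave pairs

print(f"R at the stop-band center: {reflectance(stack, lam0):.4f}")
```

At the design wavelength the reflectance is near unity (the stop band); sweeping the wavelength instead of evaluating only at lam0 traces out the full band structure, with the reflectance collapsing toward the bare-substrate Fresnel value far from resonance.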
All of the well-known components familiar to microwave engineers can therefore be constructed for light—for example, wavelength pass-rejection filters, resonators, isolators, circulators, and bends. Embedding absorbing or emitting optical elements in these structures permits tailoring of features such as the spontaneous emission probability through a lower density of radiation states. In addition to potential advantages in designing more compact optical trains transporting light from the collection aperture to the detector element, some tailoring of the thermal noise background is possible.11

7 Data derived from the International Technology Roadmap for Semiconductors, available at http://www.itrs.net. Accessed March 25, 2010.
8 J.A. Liddle, Y. Cui, and P. Alivisatos. 2004. Lithographically directed self-assembly of nanostructures. Journal of Vacuum Science and Technology B 22(6):3409-3414.
9 E. Yablonovitch and T.J. Gmitter. 1989. Photonic band structure: the face-centered cubic case. Physical Review Letters 63:1950-1953.
10 J.D. Joannopoulos, S.G. Johnson, J.N. Winn, and R.D. Meade. 2008. Photonic Crystals: Molding the Flow of Light. Princeton, N.J.: Princeton University Press.

These structures should not be confused with composite dielectrics, which are composed of two or more materials interspersed on a subwavelength scale without any consideration for ordering. An example is a perfectly black carbon surface comprising “steel wool-like” features, or a mixture of low- and high-index materials used to achieve a particular index of refraction. While subwavelength surface absorbing elements do imply the possibility of building sensors with subwavelength pixel size, diffraction effects limit the minimum pixel size independent of the length scale of the absorber, as discussed in Chapter 2. Metamaterials are an emerging class of materials with wholly new properties, such as a negative index of refraction, that offer additional possibilities for managing and directing optical paths in nonclassical ways.12 Additionally, plasmonics takes advantage of the very large (and negative) dielectric constant of metals to compress the wavelength and enhance electromagnetic fields in the vicinity of metal conductors. This has been referred to as “ultraviolet wavelengths at optical frequencies”13 and is the basis of many well-studied phenomena such as surface-enhanced Raman scattering (SERS) and surface plasma wave chemical-biological sensors.14 Additional discussion of the application to infrared detectors is presented in the following section.

Electronics

The broad applicability of nanotechnology to electronics is obvious; for example, the use of cathodic electron field emission from an assemblage of nanotubes for high-power microwave transmitters15 and other vacuum electronic applications offers copious production of electrons; this particular technology may find immediate application in fielded systems.
On an individual scale, single-wall carbon nanotubes (SWNTs) can be isolated with adequate properties16 to demonstrate transistor action for microelectronic circuits.

11 S-Y. Lin, J.G. Fleming, E. Chow, J. Bur, K.K. Choi, and A. Goldberg. 2000. Enhancement and suppression of thermal emission by a three-dimensional photonic crystal. Physical Review B 62(4):R2243-R2246.
12 W. Cai and V. Shalaev. 2010. Optical Metamaterials: Fundamentals and Applications. Berlin: Springer-Verlag.
13 M. Dragoman and D. Dragoman. 2008. Plasmonics: applications to nanoscale terahertz and optical devices. Progress in Quantum Electronics 32:1-4.
14 J. Homola, S.S. Lee, and G. Gauglitz. 1999. Surface plasmon resonance sensors: review. Sensors and Actuators B: Chemical 54:3-15.
15 K.L. Averett, J.E. Van Nostrand, J.D. Albrecht, Y.S. Chen, and C.C. Yang. 2007. Epitaxial overgrowth of GaN nanocolumns. Journal of Vacuum Science and Technology B 25(3):964-968.
16 Sang N. Kim, Zhifeng Kuang, James G. Grote, Barry L. Farmer, and Rajesh R. Naik. 2008. Enrichment of (6,5) single wall carbon nanotubes using genomic DNA. Nano Letters 8(12):4415-4420.

Techniques for generating and manipulating individual SWNTs have been perfected to the point that metal-metal,

metal-semiconductor, and semiconductor-semiconductor junctions can be reproducibly formed and their I-V curves measured;17 however, scaling this to the densities and defect levels already reached for complementary metal oxide semiconductor (CMOS) applications remains an open question. It can be noted that this last reference is from the Indian Institute of Science in Bangalore, illustrating the global sweep of this important technology. SWNTs have been assembled into electronic circuits in elementary “chips” with >20,000 elements,18 and field-effect transistors (FETs) have been reproducibly constructed to build a 10-FET ring oscillator.19 Competitive electronic applications are many years behind the level of sophistication needed to contemplate actual integration of a nanoscale readout integrated circuit (ROIC) into operating sensors. Still, the ultimate payoff of a fully integrated sensor element with nanoscale processing warrants ongoing monitoring of global activity and improvement.

Sensor Elements

Quantum sensor elements receive an incoming photon and free one or more bound electrons for amplification and signal processing. The nanomaterial necessarily must have well-defined optical and electronic properties. One such material is a quantum dot, wherein the physical dimensions are reduced to the point that electron states are no longer defined by an infinite crystal lattice; rather, the dot's physical dimension fixes the permissible energy bands, very much a man-made atomic system. Quantum-well structures also tailor bands, and QWIP sensor elements are discussed in another section of this report, along with pixel-sized antennas to guide incoming radiation into the element. At this stage of nanotechnology detector elements, QWIP and QDIP structures are the most studied, but entirely new configurations might be possible, and the literature should be monitored for them.
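The statement that the dot's physical dimension fixes the permissible energy levels can be illustrated with the elementary particle-in-a-box estimate below. This is a textbook sketch, not a calculation from this report; the infinite-well model and the GaAs effective mass (0.067 times the free-electron mass) are illustrative assumptions.

```python
# Ground-state confinement energy of an electron in an infinite 1-D well of
# width L: E1 = h^2 / (8 m* L^2).  The 1/L^2 scaling shows that the dot
# size, not the bulk lattice, sets the energy levels.
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31   # electron rest mass, kg
eV = 1.602176634e-19     # joules per electron-volt

def confinement_energy_eV(L_m, m_eff=0.067 * m_e, n=1):
    """Level-n infinite-well energy (eV) per confined direction."""
    return n**2 * h**2 / (8 * m_eff * L_m**2) / eV

for L_nm in (20, 10, 5):
    E1_meV = 1000 * confinement_energy_eV(L_nm * 1e-9)
    print(f"L = {L_nm:2d} nm: E1 ~ {E1_meV:.0f} meV")
```

Halving the box size quadruples the confinement energy, which is why small changes in dot dimensions shift the detection wavelength of a QDIP-like structure.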
Plasmonic Enhancement of Detectors

The dielectric properties of metals are often described by a free-carrier Drude model given by

    ε(ω) = 1 − ωp² / [ω(ω + iν)],

17 C.N.R. Rao, R. Voggu, and A. Govindaraj. 2009. Selective generation of single-walled carbon nanotubes with metallic, semiconducting, and other unique electronic properties. Nanoscale 1:96-105.
18 C.W. Zhou, J. Kong, E. Yenilmez, and H.J. Dai. 2000. Modulated chemical doping of individual carbon nanotubes. Science 290:1552-1555.
19 P. Avouris. 2009. Carbon nanotube electronics and photonics. Physics Today 62(1):34-40.

where

    ωp = [4πe²N / (κε0m*)]^(1/2)

is the plasma frequency in the metal, with e the electronic charge, N the carrier concentration, κ the relative dielectric constant arising from the bound electrons, and m* the electron effective mass. For single-electron-per-atom metals such as gold and silver, ωp is in the ultraviolet spectral region. Here ν is the electron collision frequency, which is typically in the terahertz regime. At radio frequencies, ω/ν << 1, and the metal response is large and imaginary (out of phase with the driving electric field). Throughout the infrared, ω/ν >> 1, and the metal response is large and negative with a smaller imaginary part. This is the plasmonic regime. At visible frequencies, additional losses due to bound-electron transitions become more important, and the dielectric function is both lossier and not as negative. For most metals except gold, silver, and aluminum, the dielectric function is positive across the visible.

Some of the implications of this dielectric function have been recognized for more than 100 years. Sommerfeld was the first to recognize the existence of a bound surface mode at the interface between a lossless dielectric and a lossy metal in his analysis of Marconi's wireless transmission experiments (there the loss was associated with currents in Earth's surface).20,21,22 More recently, interest in plasmonics has been rekindled by the discovery of anomalous transmission through a metal slab perforated with a 2-D array of holes.23,24 This transmission is associated with resonances involving the coupling of the incident radiation to surface plasma waves (SPWs) localized at a metal-dielectric interface (a thin metal film has two such SPWs, one on either side of the film) and the localized resonances associated with the holes (or other unit-cell structures such as annuli25).

20 A. Sommerfeld. 1909. Über die Ausbreitung der Wellen in der drahtlosen Telegraphie. Annalen der Physik 28:665-737.
21 A. Baños. 1966. Dipole Radiation in the Presence of a Conducting Half-Space. Oxford: Pergamon Press.
22 S.R.J. Brueck. 2000. Radiation from a dipole embedded in a dielectric slab. IEEE Journal of Selected Topics in Quantum Electronics 6:899-910.
23 T.W. Ebbesen, H.J. Lezec, H.F. Ghaemi, T. Thio, and P.A. Wolff. 1998. Extraordinary optical transmission through sub-wavelength hole arrays. Nature 391:667-669.
24 For a more recent review, see C. Genet and T.W. Ebbesen. 2007. Light in tiny holes. Nature 445:39-46.
25 W. Fan, S. Zhang, B. Minhas, K.J. Malloy, and S.R.J. Brueck. 2005. Enhanced infrared transmission through subwavelength coaxial metallic arrays. Physical Review Letters 94:033902.
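A short numerical sketch of the Drude expression above makes the frequency regimes concrete. The plasma and collision frequencies below are rough assumed values for a gold-like metal, not figures from this report.

```python
import numpy as np

def drude_eps(omega, omega_p, nu):
    """Drude dielectric function eps(w) = 1 - wp^2 / (w * (w + i*nu))."""
    return 1 - omega_p**2 / (omega * (omega + 1j * nu))

omega_p = 1.37e16   # rad/s, roughly a gold plasma frequency (assumed)
nu = 4.1e13         # rad/s, roughly a gold collision frequency (assumed)

# Mid-infrared test point: 10 um wavelength, omega = 2*pi*c/lambda.
c = 2.99792458e8
omega_ir = 2 * np.pi * c / 10e-6
eps_ir = drude_eps(omega_ir, omega_p, nu)
print(f"eps at 10 um: {eps_ir.real:.0f} {eps_ir.imag:+.0f}j")
# large, negative real part with a smaller positive imaginary part:
# the "plasmonic regime" described in the text
```

Evaluating the same function at radio frequencies (ω/ν << 1) instead yields a response dominated by the imaginary part, matching the limiting cases stated above.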

The implications for detectors were recognized quite some time ago26 and rediscovered soon after the discovery of the anomalous transmission.27 In addition to the distributed SPW coupling, work has also been reported using a shaped plasmonic lens structure to funnel all of the incident radiation to a single small detector at the center of a bull’s-eye pattern.28 This particular experiment was for a SWIR detector intended for integration with silicon integrated circuits, and the goal was reduced capacitance for higher-speed operation. For the infrared, this approach is aimed mainly at reducing the detector volume and, consequently, thermal noise sources such as generation-recombination dark current in high-operating-temperature MWIR detectors.29 The difficulty is finding the appropriate combination of SPW coupling, hole transmission, and angular and spectral bandwidth while still retaining the ability to collect the photo- or plasmon-generated carriers.

Very recently, a 30× enhancement in detectivity was obtained for a SPW-coupled QDIP detector30 using a similar transmission metal grating. This is very early work, still in the exploratory research stage and far from ready for integration into commercial focal planes, but it does offer the potential of a multispectral, MWIR-LWIR focal plane array with high quantum efficiency, polarization and spectral selectivity, and some degree of electrical tuning based on the quantum-confined Stark effect in QDIP detectors. Ultimately, based on previous results on coupling to SPWs, quantum efficiencies near unity should be possible. These arrays could be a new direction in IR FPAs; however, the road to a fielded product is long and success is not guaranteed.
FINDING 4-1
An emerging trend in focal plane array technologies is multispectral band sensing, enabling enhanced system capability through a single aperture. Spectral information is an added discriminant for enhanced detection selectivity and material identification.

26 S.R.J. Brueck, V. Diadiuk, T. Jones, and W. Lenth. 1985. Enhanced quantum efficiency internal photoemission detectors by grating coupling to surface plasma waves. Applied Physics Letters 46:915-917.
27 Z. Yu, G. Veronis, S. Fan, and M.L. Brongersma. 2006. Design of mid-infrared photodetectors enhanced by surface plasmons on grating structures. Applied Physics Letters 89:151116.
28 T. Ishi, J. Fujikata, K. Makita, T. Baba, and K. Ohashi. 2005. Si nano-photodiode with a surface plasmon antenna. Japanese Journal of Applied Physics 44:L364-L366.
29 R.D.R. Bhat, N.C. Panoiu, S.R.J. Brueck, and R.M. Osgood. 2008. Enhancing the signal-to-noise ratio of an infrared photodetector with a circular metal grating. Optics Express 16(7):4588-4596.
30 S.C. Lee, S. Krishna, and S.R.J. Brueck. 2009. Quantum dot infrared photodetector enhanced by surface plasma wave excitation. Optics Express 17(25):23160-23168.

FINDING 4-2
By manipulating fields at the subwavelength scale, nanophotonics offers a potential for enhanced detector functionality, particularly in adding wavelength and/or polarization selectivity along with enhanced detectivity at the pixel scale.

Antennas

An antenna is a transduction device that is used to transmit or receive electromagnetic radiation. Traditionally, antennas have been associated with transmit or receive applications in the radio-frequency (RF) spectrum. Examples of RF antennas can be found in many aspects of everyday life, since they are a ubiquitous component in television, radio, voice, data, radar, and other communication networks. However, within the past decade, the potential for developing antennas that work in the visible and IR spectra has been explored with increasing interest. In this section, recent research in IR and optical antennas is reviewed and assessed in terms of relevance to visible and IR detector technologies.

Antennas are composed of various conducting elements arranged in a pattern designed for optimum performance in a given application. The performance parameters typically cited are gain, bandwidth, and efficiency, and these are all functionally related to both the size of the antenna elements and the conducting (electrical and magnetic) properties of the materials that make up the antenna. While the basic equations that govern antenna performance in the RF spectrum have analogues in the IR and optical spectra, the physical realization of the elements that constitute a functional antenna is vastly different. The two major reasons for these differences are that antenna size scales with wavelength and that material losses in conductors increase significantly with frequency.
The scaling issue requires that antennas working in the IR and optical spectra typically be on the order of microns or smaller, and the loss issue presents significant challenges in designing antennas that achieve reasonable efficiencies at IR and/or optical wavelengths.

Despite the challenges in developing antennas for optical and/or IR applications, there has been a significant body of research focused on exploring this topic. The primary drivers for this research are the need for a large absorption cross section together with high field localization and/or enhancement for applications in nanoscale imaging and spectroscopy, solar energy conversion, and coherent control of light emission and/or absorption.31 It is difficult to speculate on the

31 An overview of the current research on optical antennas as well as a discussion of their potential applications can be found in Palash Bharadwaj, Bradley Deutsch, and Lukas Novotny. 2009. Optical antennas. Advances in Optics and Photonics 1(3):438-483, along with the article's associated references. Additional useful summaries of research that is representative of the fundamental nature of current optical and IR antenna research are found in P. Mühlschlegel, H.-J. Eisler, O.J.F. Martin, B. Hecht, and D.W. Pohl. 2005. Resonant optical antennas. Science 308(5728):1607-1609; J. Greffet. 2005. Applied
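The wavelength-scaling point can be made concrete with the ideal resonance condition for a half-wave element, L = λ/2 = c/2f. This is a free-space idealization; real IR and optical antennas are shorter still, because field penetration into lossy metals shifts the resonance.

```python
# Illustrative only: free-space half-wave resonant element length L = c / (2f).
C = 2.998e8  # speed of light, m/s

def half_wave_length_m(freq_hz):
    """Ideal free-space half-wavelength for a given frequency."""
    return C / (2.0 * freq_hz)

# FM radio (~100 MHz): element length on the order of meters.
print(half_wave_length_m(100e6))          # ~1.5 m
# LWIR light at 10 um wavelength (f = c / lambda): element length of microns.
print(half_wave_length_m(C / 10e-6))      # 5e-6 m, i.e. 5 um
```

Five orders of magnitude separate the two element sizes, which is why IR/optical antennas are fabricated with nanolithography rather than machined metal.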

compression efficiency. The last step of encoding is aimed at achieving rates that approach the entropy of the quantized values or symbols.83

Emerging Compression Techniques

Compressive sensing or sampling is an emerging technology that is surfacing in various implementations, ranging from a direct application of image compression to new sensors that embed the concept of compressed sensing at the analog-optical layer.84 Compressive sensing or sampling relies on two basic principles: sparsity and incoherence. The basic assertion is that many signals are sparse, meaning that they have a very compact representation if the basis functions are chosen properly. Incoherence is an extended case of the duality concept between the time and frequency domains: a signal sparse in time is spread in frequency. If the signal has a compact representation in some basis, the sampling-sensing waveforms have a very dense representation. The implication is that efficient sampling can be designed that captures the useful information embedded in the signal and transforms it into a small amount of data. Large upfront data reduction directly translates to a reduced requirement for communication bandwidth. The simplicity of the sampling stage is obtained at the cost of a complex reconstruction stage that requires the application of computing-intensive linear programming techniques.

The same concepts are also being explored in the design of a new generation of imagers that employ compressive sensing at the optical-analog layer. In this new concept of sensing, a smaller number of measurements is acquired, and each measurement corresponds to a quantity in the transformed space that is optically accomplished as the inner product of the scene with the basis functions.
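The sampling-and-recovery idea can be sketched end to end in a toy example. Everything here is a deliberate simplification: the scene is 1-sparse in the canonical basis, the fixed ±1 sampling patterns stand in for the random incoherent patterns used in practice, and a single matching-pursuit step stands in for the linear-programming reconstruction described above.

```python
# Toy compressive-sampling sketch (illustrative only, not the analog-optical
# implementation described in the text). A scene that is 1-sparse in the
# canonical basis is measured with m << n inner products against fixed +/-1
# patterns, then the single nonzero pixel is recovered by correlating the
# samples against each pattern column.
n, m = 16, 5                        # scene length, number of measurements

def pattern(i, j):                  # +/-1 sampling patterns: sign bits of j
    return 1 if (j >> i) & 1 else -1

scene = [0.0] * n
scene[11] = 3.0                     # sparse scene: one bright pixel

# Sampling: m inner products of the scene with the patterns.
y = [sum(pattern(i, j) * scene[j] for j in range(n)) for i in range(m)]

# Recovery (one matching-pursuit step; exact here because the scene is
# 1-sparse and distinct pattern columns are only weakly correlated).
corr = [abs(sum(pattern(i, j) * y[i] for i in range(m))) for j in range(n)]
support = corr.index(max(corr))
amplitude = sum(pattern(i, support) * y[i] for i in range(m)) / m
print(support, amplitude)           # -> 11 3.0
```

Five measurements recover a 16-pixel scene; the compression happens at sampling time, before any data ever leave the sensor.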
Because of the significant reduction of sensor components and elements (without sacrifice in performance), these new imagers likely will be deployed on many more platforms than currently feasible.

83 Entropy is considered a standard measure of complexity. It is a property of a distribution over a discrete set of symbols. For a sequence of symbols drawn from an alphabet in which symbol i occurs with probability p(i), the entropy H of the generating random variable X is given by H(X) = −Σi p(i) log2 p(i). The units of entropy and of the entropy rate are bits and bits per symbol, respectively. The entropy of a sequence is the length of the shortest binary description of the states of the random variable that generates the sequence, so it is the size of the most compressed description of the sequence. For additional information, see S. Lloyd. 1990. Physical measures of complexity. In E. Jen, ed. Lectures in Complex Systems, SFI Studies in the Sciences of Complexity, Vol. II, pp. 67-73. Addison-Wesley; Claude E. Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal 27:379-423, 623-656. There is a related measure of complexity, called Kolmogorov complexity, that measures the size of the smallest program necessary to produce an output. For additional information, see http://en.wikipedia.org/wiki/Entropy_(information_theory). Last accessed June 21, 2010.
84 Emmanuel J. Candès and Michael B. Wakin. 2008. An introduction to compressive sampling. IEEE Signal Processing Magazine 25(2):21-30.
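The entropy bound described in footnote 83 is straightforward to compute directly; a minimal sketch:

```python
import math

# Shannon entropy H = -sum_i p(i) log2 p(i), in bits per symbol: the
# lower bound on lossless coding rate described in footnote 83.
def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform 4-symbol alphabet needs 2 bits/symbol; a skewed one needs fewer,
# which is exactly the slack an entropy coder exploits.
print(entropy_bits([0.25] * 4))                            # -> 2.0
print(entropy_bits([0.5, 0.25, 0.125, 0.125]))             # -> 1.75
```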

Data Screening Techniques

The basic principle behind the application of data screening methods is to filter (screen) the raw data and extract only the data segments that are of interest; this is called relevance filtering in other contexts. Example screening techniques include automatic target detection and/or recognition and material identification, among others. In principle, the use of data screening techniques as a compression mechanism amounts to shifting the mission-specific processing algorithms from the receiving ground station to the collection platform.

A number of approaches can be taken once the image regions of interest have been identified. One approach is to transmit only the data associated with the regions of interest (with no loss of fidelity) and discard the remaining data. Alternatively, one might choose to further reduce the volume of data either through lossy compression of the selected data or through the use of derived attributes such as the identification and location of the detected targets. This compression method is typically used in unmanned ground sensors, as well as on some airborne platforms.

Application-specific Processing

The signal processing and exploitation chain consists of two major components: (1) conversion of sensor input into physically meaningful values that can then be further processed by (2) mission- or application-specific algorithms (e.g., missile detection and tracking, target detection, automatic target recognition, and material identification). In this report the primary focus is on the core signal processing chain that is accomplished on the collection platform and is essential for high-quality image acquisition, as shown in Figure 4-10.

In the IR and high-performance visible imaging area, fully digital focal planes are just entering the market. In some cases their performance exceeds that of traditional analog focal planes coupled to discrete electronics.
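The screen-then-transmit idea described above can be sketched in a few lines. The threshold detector and fixed chip size here are hypothetical stand-ins for an actual target-detection algorithm; the point is the data-volume arithmetic, not the detection logic.

```python
# Minimal data-screening sketch: scan a frame for pixels exceeding a
# (hypothetical) detection threshold, then keep only small chips around
# each detection instead of the full frame.
def screen(frame, threshold=200, half=1):
    rows, cols = len(frame), len(frame[0])
    chips = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold:          # crude "target" detector
                r0, c0 = max(0, r - half), max(0, c - half)
                r1, c1 = min(rows, r + half + 1), min(cols, c + half + 1)
                chips.append(((r0, c0), [row[c0:c1] for row in frame[r0:r1]]))
    return chips

frame = [[10] * 8 for _ in range(8)]
frame[3][5] = 250                                  # one bright target
chips = screen(frame)
sent_px = sum(len(ch) * len(ch[0]) for _, ch in chips)
print(len(chips), sent_px, "of", 64)               # -> 1 9 of 64
```

Here a single 3 × 3 chip (plus its location) replaces the 64-pixel frame; at operational frame sizes the reduction is correspondingly larger.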
On-ROIC digital logic enables future digital signal processing such as nonuniformity correction (NUC), image stabilization, and compression, leading to much smaller systems (microsystems). This technology is poised to go into large-area cryogenic infrared sensors over the next few years.

Local Processing

Sending data off-chip requires substantial power, and sending those data through communication links (for example, an RF link for an unattended ground sensor) is even more energy intensive. As described above, most of the basic functionality depicted in Figure 4-10, such as AGC (automatic gain control) and TDI (time delay integration), is currently embedded and implemented as an integral part of the ROIC functionality. There is increasing recognition of the value of trying to

FIGURE 4-10 Components of a signal processing architecture.

identify and transmit only small amounts of high-level “actionable information” rather than large numbers of raw data bits.

Architectural bottlenecks occur when steps in digital signal processing are mismatched in their performance. The goal of later processing steps should be to extract as much information as possible from this sensing infrastructure. In practice this means processing to remove noise, to obtain true information, and to lose as little information as possible. A bottleneck in a system can be viewed as a lossy filter. In the case of a parallel front end, a number of bits of attributes, such as amplitudes, but perhaps frequencies and phases as well, will be available to later processing stages. For example, for a 10-megapixel array, each pixel might produce 32 bits of information per sample at a sampling rate of 30 frames per second, leading to an aggregate bit rate of 9.6 × 10^9 bits, or about 1.2 gigabytes, per second. More information per sample, more pixels, or a greater scan rate has a significant effect on the processing and communication demands of such a sensor.

As a specific example, the DARPA ARGUS-IS unmanned aerial system described in Chapter 3 (Boxes 3-1 and 3-2) contains 368 visible FPAs.85 At data rates of 96 megapixels per second per FPA, 12 bits per pixel, and 368 FPAs, the total data rate from the sensor is about 424 gigabits per second. This data rate is beyond the capacities of conventional processing elements. In addition, ARGUS-IS uses a spread-spectrum jam-resistant CDL wireless data link of 274 megabit per second capacity. If the wireless data link is fully utilized, the on-board systems must achieve a data rate reduction of 423,936/274, or more than 1,500, which is difficult to achieve with compression technologies alone.

ARGUS-IS approaches the data management challenge with a novel on-board processing architecture characterized by parallel interconnects. Each FPA pair feeds a field-programmable gate array (FPGA) that multiplexes the data from the two FPAs (a total of 2.3 gigabits per second), time tags these data, and interleaves the data onto a fiber-optic transceiver. The transceiver is a commercial off-the-shelf (COTS) device operating at 3.3 gigabits per second. Sixteen 12-fiber ribbon cables connect the 184 FPGAs to the ARGUS-IS airborne processor system, a multiprocessor system illustrated in Box 3-1, which consists of 32 processor modules. Each processor module can handle 6 fibers, or about 20 gigabits per second of data, and consists of two Xilinx Virtex 5 FPGAs and a low-power Intel Pentium general-purpose central processing unit (CPU). The ARGUS-IS designers believe that the processor modules can provide more than 500 billion operations per second each, for a total processing capacity for the 32 processor modules in excess of 16 trillion operations per second. To overcome some of the data rate limitations of the CDL downlink, JPEG 2000 compression is done using application-specific integrated circuits (ASICs) to provide hardware assist. The ARGUS-IS designers note the severe limitations of the 200+ megabit per second data link and propose moving target tracking into the on-board software. ARGUS-IS illustrates many of the system architecture trade-offs discussed in this chapter.

As the demand increases for on-board processing functionality that mirrors what traditionally has been accomplished on the ground, the need for compute power that meets size, weight, and power (SWaP) constraints is continuously growing. The amount of processing that can be placed right at the pixel has generally been limited by the modest numbers of transistors that can fit within a small pixel. Significant advances have been made in computer architectures and are currently referred to as high-performance computing. Commercial applications such as video gaming, increased cell phone functionality, and so forth, have been pushing significant advances in small, low-power, high-performance computing platforms

85 Brian Leininger, Jonathan Edwards, John Antoniades, David Chester, Dan Haas, Eric Liu, Mark Stevens, Charlie Gershfield, Mike Braun, James D. Targove, Steve Wein, Paul Brewer, Donald G. Madden, and Khurram Hassan Shafique. 2008. Autonomous Real-time Ground Ubiquitous Surveillance—Imaging System (ARGUS-IS). Proceedings of the SPIE 6981:69810H-1.
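The rates quoted above follow from simple arithmetic, which can be checked directly:

```python
# Back-of-envelope check of the data rates quoted in the text.
def rate_bps(pixels_per_s, bits_per_pixel):
    return pixels_per_s * bits_per_pixel

# 10-megapixel array, 32 bits/sample, 30 frames/s:
single = rate_bps(10e6 * 30, 32)
print(single)                      # 9.6e9 bits/s
print(single / 8 / 1e9)            # ~1.2 gigabytes/s

# ARGUS-IS: 96 Mpix/s per FPA, 12 bits/pixel, 368 FPAs:
argus = rate_bps(96e6 * 368, 12)
print(argus / 1e9)                 # ~424 gigabits/s
print(argus / 274e6)               # reduction factor needed for the 274 Mb/s CDL link
```

The required reduction factor comes out to roughly 1,550, consistent with the "more than 1,500" figure in the text.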

that are available on the commercial market (for example, multicore processors and multicore graphics processing units [GPUs]) and have no International Traffic in Arms Regulations (ITAR) restrictions. Figure 4-11 depicts a Texas Instruments system-on-a-chip solution for cell phone applications. The significant advances in high-performance computing over the last decade are making real-time on-board processing a reachable reality.

However, the complexity of programming these powerful processors and achieving their potential compute power has increased significantly: both the exploitation of the available parallelism and the memory organization of the computation are subtle and require significant effort. A key enabler for exploiting this emerging computational power is the new sets of software tools that enable rapid porting and debugging of existing algorithms on multicore computing platforms. As discussed in the following two examples, extensive sets of dedicated software tools are emerging in support of the hardware platforms.

FIGURE 4-11 Texas Instruments OMAP 4430 system on a chip. SOURCE: Courtesy Texas Instruments. Available at http://focus.ti.com/general/docs/wtbu/wtbuproductcontent.tsp?templateId=6123&navigationId=12843&contentId=53243. Last accessed March 25, 2010.

NVIDIA multicore technology is one example of commercially available high-performance computing technologies. In 2010, NVIDIA announced the launch of its next-generation Tegra, a multiprocessing system focused on the mobile web.86 The Tegra has eight independent processors, including a dual-core CPU for mobile applications. These processors are used together or independently to optimize power usage.

At another point on the performance scale, NVIDIA has also introduced the capability to perform petascale computing with teraflop processors. The NVIDIA Tesla C1060 computing processor provides energy-efficient parallel computing power by bringing the performance of a small cluster to a workstation environment. The CUDA programming environment (the C programming language, enhanced with thread management primitives that inform the GPU runtime of what can be executed concurrently) simplifies the development of applications for its 240 processor cores.

Another example, with a significantly lower power profile, is the series of multicore chips and boards provided by Tilera. The first-generation Tilera TILE64 commercial chip is laid out as an 8 tile × 8 tile interconnect using a pipelined and programmable two-dimensional, proprietary, high-performance, low-latency mesh. The mesh can transport streams of data, memory blocks, or scalar values, adding to the flexibility of the programming models available for the chip. In addition, this approach can support arbitrary numbers of tiles, so the 8 × 8 layout of this initial chip is not a fixed configuration. This makes the architecture particularly suitable for radiation-hard applications, where a different number of tiles per chip may be required for power dissipation and several other technical reasons.
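The tile- and core-parallel style these parts expose can be sketched with a toy tiled kernel. Python threads here merely stand in for hardware tiles or GPU cores; the sketch illustrates the decomposition of a frame into independently processed tiles, not the performance of the silicon.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of tiled, data-parallel processing: partition a frame into
# independent tiles and map a per-tile kernel (here a toy mean-subtraction
# step) across a pool of workers.
def process_tile(tile):
    mean = sum(sum(row) for row in tile) / (len(tile) * len(tile[0]))
    return [[px - mean for px in row] for row in tile]

def tiles(frame, t):
    for r in range(0, len(frame), t):
        for c in range(0, len(frame[0]), t):
            yield [row[c:c + t] for row in frame[r:r + t]]

frame = [[(r * 16 + c) % 251 for c in range(16)] for r in range(16)]
with ThreadPoolExecutor(max_workers=4) as pool:
    out = list(pool.map(process_tile, tiles(frame, 8)))
print(len(out), len(out[0]), len(out[0][0]))   # -> 4 8 8
```

Because the tiles share no state, the same decomposition maps naturally onto a 2-D mesh of cores, with each tile's data kept local to its core.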
The switch engine in each tile completely offloads the tile processing engine from iMesh™ network routing and protocol handling, and provides buffering and flow control so that tiles can perform processing in an asynchronous manner. Each network link is full duplex. The dynamic networks are routed in a dimension-ordered fashion (x-direction first, then y-direction). A RadHard-by-design chip (MAESTRO) is currently being developed under the National Reconnaissance Office-funded OPERA program.

The third generation of Tilera Corporation's multicore processors is aimed at delivering the highest available general-purpose compute at the lowest power consumption. The TILE100 will provide a 4× to 8× increase in performance over Tilera's current TILEPro64 processor and will double the performance-per-watt metric. Table 4-2 provides a comparison between the Tile64, the Tile100, and the MAESTRO processors.

86 2010 International Consumer Electronics Show (CES), Las Vegas, Nevada.
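The x-first, then-y routing discipline described above can be sketched as a generic dimension-ordered router (this is the general technique, not Tilera's implementation): a packet walks along the x-direction until it reaches the destination column, then along the y-direction to the destination row, giving a deterministic path through the mesh.

```python
# Dimension-ordered ("x first, then y") mesh routing: walk the x-direction
# to the destination column, then the y-direction to the destination row.
def xy_route(src, dst):
    (x, y), (dx, dy) = src, dst
    hops = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

# Tile (0,0) to tile (3,2): three x hops, then two y hops.
path = xy_route((0, 0), (3, 2))
print(path)
```

Deterministic dimension-ordered routing is attractive in hardware because it is cheap to implement in each switch and is deadlock-free on a mesh.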

TABLE 4-2 Comparison Between the Tile64, the Tile100, and the RadHard by Design Processors

Performance                   TILEPro64             Tile100 (Greylock)    Maestro (RHBD)
Number of cores               64                    100                   49
Temperature range             0°C to 70°C           0°C to 70°C           –55°C to 125°C
Foundry                       TSMC                  TSMC                  IBM
Feature size                  90 nm                 40 nm                 90 nm
On-chip cache (MB)            5.6                   32                    4.3
Floating point operations     ~10                   29                    22
  (GFLOPS)                    (software emulated)   (FPU accelerator)     (IEEE 754 FPU per core)
On-chip bandwidth (Tbit/s)    38                    232                   16
Clock speed (MHz)             700, 866              1250, 1750            300
Typical power (W)             27                    <50 (estimated)       18 (estimated)
Total I/O bandwidth (Gbps)    40                    44                    40
Ethernet bandwidth            2 XAUI, 2GbE          2 XAUI, 2GbE          4 XAUI, 4GbE

NOTE: FPU = floating point unit; I/O = input-output.
SOURCE: Data derived from http://www.tilera.com/products/TILEPro64.php; http://www.tilera.com/products/TILE-Gx.php; http://nepp.nasa.gov/mapld_2009/talks/083109_Monday/03_Malone_Michael_mapld09_pres_1.pdf. Accessed March 25, 2010.

Tilera provides a Multicore Development Environment (MDE). The MDE tool kit provides a complete Integrated Development Environment (IDE).87

FINDING 4-7 Scaling the data throughput of focal plane sensor systems involves not only the sensor chip but also the detector-processor interface, signal processing and compression, and the communication link (wireless for remote air- and space-borne missions). Advanced compression and filtering with on-board processing provided by commodity multicore architectures are reducing communications demands.

87 Standard Eclipse-based IDE; ANSI standard C compiler (see Section 6.0) and C++ compiler; multi-tile cycle-accurate simulator; whole-chip debug and performance analysis; complete SMP Linux 2.6 environment with standard runtime environment and command-line tools; iLib library for efficient intertile communications; PCIe hardware development platform support; Linux and Windows host environments.
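The performance-per-watt metric mentioned above can be computed directly from the Table 4-2 entries; the result is indicative only, since two of the power figures in the table are estimates.

```python
# Performance per watt from the Table 4-2 entries (GFLOPS, typical watts).
# The Tile100 and Maestro power figures are estimates, so these ratios
# are indicative only.
parts = {
    "TILEPro64": (10, 27),
    "Tile100":   (29, 50),       # power listed as "<50 (estimated)"
    "Maestro":   (22, 18),       # estimated
}
for name, (gflops, watts) in parts.items():
    print(name, round(gflops / watts, 2))   # GFLOPS per watt
```

On these numbers the radiation-hard Maestro, despite its lower clock speed, compares favorably in GFLOPS per watt because every core carries a hardware FPU.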

RECOMMENDATION 4-2 Analyses of national capabilities should include consideration of advances in processing technologies for other uses—for example, commercial developments—that could also enhance the use of detectors in future sensor systems.

Multisensor Data Fusion

Performance enhancements can be achieved by combining data collected with IR sensors with additional sensor modalities. Multisensor data fusion88,89,90 has been an evolving set of architectures and algorithms for drawing inferences from a multiplicity of sensors used in combination. Any given multisensor system will have an architecture that defines its ultimate capabilities, and a set of algorithms will then be used to draw the necessary inferences from the fused data. Algorithms for sensor fusion include techniques such as Bayesian inference,91 Dempster-Shafer evidential reasoning,92,93 and voting. The specific techniques depend on both the mission or application and the specific sensor modalities. Sample applications include image enhancement for improved navigation in low-light conditions, target detection, and target recognition. An example is the fusion of data from a pulsed radar and an IR sensor, as shown in Figure 4-12.94 The radar can determine range, but not angular direction, while the forward-looking infrared (FLIR) sensor can determine angular direction, but not range. By fusing the two sensor modalities, both range and angular direction can be determined.

Three specific methods for sensor fusion are raw data fusion, feature-level fusion, and decision-level fusion. The highest level of fusion occurs when the multiple images are combined into a single multivalue image that is then exploited. This requires accurate alignment of sensor measurements across the multiple sensors, resulting in a vector of multimodal measurements associated with a common ground location.
The combined multivalue image is then processed by an application-specific algorithm, such as automatic target cuing-recognition

88 David L. Hall and James Llinas. 1997. An introduction to multisensor data fusion. Proceedings of the IEEE 85(1).
89 David L. Hall and Sonya A.H. McMullen. 2004. Mathematical Techniques in Multisensor Data Fusion, 2nd Edition. Artech House. ISBN 978-1580533355.
90 Lawrence A. Klein. 2004. Sensor and Data Fusion: A Tool for Information Assessment and Decision Making. SPIE Press. ISBN 978-0819454355.
91 Lawrence A. Klein. 2004. Sensor and Data Fusion: A Tool for Information Assessment and Decision Making. SPIE Press. ISBN 978-0819454355.
92 A.P. Dempster. 1968. A generalization of Bayesian inference. Journal of the Royal Statistical Society 30:205-247.
93 G. Shafer. 1976. A Mathematical Theory of Evidence. Princeton, N.J.: Princeton University Press.
94 David L. Hall and James Llinas. 1997. An introduction to multisensor data fusion. Proceedings of the IEEE 85(1).
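In the simplest flat, two-dimensional, noise-free case (with made-up measurement values), the radar/FLIR complementarity reduces to combining the radar range with the FLIR azimuth to fix the target position, which neither sensor could do alone:

```python
import math

# Geometric sketch of the radar/FLIR fusion idea (2-D, noise-free toy case
# with made-up numbers): the radar supplies slant range only, the FLIR
# supplies azimuth only; together they localize the target.
range_m = 5000.0                    # radar measurement
azimuth_deg = 30.0                  # FLIR measurement

x = range_m * math.sin(math.radians(azimuth_deg))   # cross-range position
y = range_m * math.cos(math.radians(azimuth_deg))   # down-range position
print(round(x, 1), round(y, 1))     # -> 2500.0 4330.1
```

With measurement noise included, the fused position estimate lives in the intersection of the two uncertainty regions, which is what Figure 4-12 depicts.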

FIGURE 4-12 Fusion of data from a pulsed radar and an IR sensor. The radar report constrains slant range, the FLIR report constrains azimuth and elevation, and the fused target location lies in the intersection of the two absolute uncertainty regions. SOURCE: David L. Hall and James Llinas. 1997. An introduction to multisensor data fusion. Proceedings of the IEEE 85(1).

(ATC/R), which simultaneously operates on the vector of measurements. Fusion techniques that operate in this mode are known as raw data fusion, as well as centralized data fusion methods, and they typically assume a common image projection plane for the multiple sensors. Multi- and hyperspectral sensors are well matched to this type of fusion approach.95,96 Centralized data fusion is not typically applied to sensors that do not share a common imaging plane, such as synthetic aperture radar (SAR).97

95 G. Shafer. 1976. A Mathematical Theory of Evidence. Princeton, N.J.: Princeton University Press.
96 Tamar Peli, Ken Ellis, Robert Stahl, and Eli Peli. 1999. Integrated color coding and monochrome multi-spectral fusion. In Detection and Countermeasures: Infrared Detection and Detectors Conference.
97 M. Aguilar, D.A. Fay, W.D. Ross, A.M. Waxman, D.B. Ireland, and J.P. Racamato. 1998. Real-time fusion of low-light CCD and uncooled IR imagery for color night vision. Proceedings of the SPIE 3364.

FIGURE 4-13 Improved ATC through feature-level fusion of SAR and multispectral imagery. Feature extraction and discrimination are performed separately on the multispectral and SAR images; the extracted features are then correlated and fused, objects are verified, and final detections are produced with associated confidence measures. SOURCE: T. Peli, M. Young, R. Knox, K. Ellis, and F. Bennett. 1999. Feature level fusion. Proceedings of the SPIE, Sensor Fusion: Architectures, Algorithms, and Applications III, Vol. 3719. Orlando, Fla.

In contrast to raw data fusion, feature- and decision-level fusion methods do not require precise alignment. In decision-level fusion, also known as distributed data fusion, an independent decision is made based on a single data modality, and the decisions are passed to the fusion node, where a global decision is made using a variety of algorithms, including Bayesian inferencing. Feature-level fusion is a hybrid between raw data fusion and decision-level fusion. In feature-level fusion,98 each sensor output is processed independently, attributes associated with events or entities of interest are extracted in each sensor domain, and a decision is made based on the combined joint feature set. Figure 4-13 depicts improved ATC/R through feature-level fusion of SAR and multispectral imagery.

CONCLUDING THOUGHTS

Emerging technologies could enable new capabilities, with examples being the ability of some advanced detector technologies to enable multispectral sensing through a single aperture and the potential advantages in selectivity accruing from nanophotonics. Advances in non-detector-specific technologies, such as communications technology and signal processing technologies, have a direct bearing on the ability to turn detector data into useful information. As an example, the effects of miniaturization and parallel processing have made commodity components

98 A.M. Waxman, M. Aguilar, R.A. Baxter, D.A. Fay, D.B. Ireland, J.P. Racamoto, and W.D. Ross. 1998. Opponent-color fusion of multi-sensor imagery: visible, IR and SAR. Proceedings of the Conference of the IRIS Specialty Group on Passive Sensors.

sufficiently powerful to provide onboard processing capabilities for gigapixel-class sensors. Both the basic science underlying advances in detectors, optics, coolers, and algorithms and the commodity processing capabilities enabling new trades in signal processing are available worldwide. Emerging detector and related system-level technologies have significant potential for advancing sensor systems and deserve attention from the intelligence community in assessing present and future global sensor system capabilities.