
2
Fundamentals of Ultraviolet, Visible, and Infrared Detectors

INTRODUCTION

Electro-optical detectors are used to measure or sense the radiation emitted or reflected by objects within the detector’s optical field of view (FOV). Passive systems operate without any illumination of the object by the observer, relying on either self-luminosity (for example, a hot rocket exhaust) or reflection-transmission of ambient light. In active systems, the observation is associated with irradiation of the scene (as in a camera flash) in the spectral region of interest. A detector converts incident radiation to an electrical signal that is often proportional to the incoming intensity. This electrical signal is processed, usually digitally, transmitted, and/or stored. A two-dimensional array of detectors, called a focal plane array (FPA), is often placed at the focal plane of an optical system so that the spatial variation of the incident intensity is recorded as an image. There are many excellent texts at both introductory and advanced levels that deal with the fundamentals and applications of ultraviolet (UV), visible, and infrared detectors.1,2 The committee’s intent is to provide a brief introduction to facilitate reading the material that follows.

1

E.L. Dereniak and G.D. Boreman. 1996. Infrared Detectors and Systems. Hoboken, N.J.: John Wiley and Sons.

2

S. Donati. 2000. Photodetectors: Devices, Circuits and Applications. Saddle River, N.J.: Prentice-Hall.


SOURCES

Sources include self-emission from hot objects, which generally follows a blackbody radiation curve determined by the temperature of the source and modified by the spectral emissivity of the object. Alternatively, for passive sensors, the reflection or transmission modification of ambient sources can be detected. During daytime, the dominant source in the visible is the sun. There is significant nightglow in the spectral region around 1.5 μm that makes short-wavelength infrared (SWIR) imaging an alternative to visible image intensifier night vision goggles for some night vision applications.3,4 The semiconductor absorbance ranges that enable passive night vision and the nightglow irradiance spectrum are illustrated in Figure 2-1. The peak of the room-temperature blackbody curve is at about 10 μm in the infrared.

TRANSMISSION

Spectral Regions

Over the years a number of designations for spectral regions have become somewhat standard, but there is significant overlap and it is useful to define the regions used in this report to assist the reader (see Table 2-1). The transitions between these regions are not sharply defined and the designations are to be interpreted loosely; the detection mechanisms, the transmission, and the dominant noise sources all vary across these bands. These definitions help in discussing those variations cohesively.

Electromagnetic sensors cover the entire range from 200 nm to 20 μm and beyond; this taxonomy is intended merely to provide a nomenclature for the most frequently used bands for long-range imaging.

Atmospheric Transmission

Atmospheric transmission is an important aspect of any terrestrial remote sensing application. Figure 2-2 shows the atmospheric transmission across the 0.2-20 μm region (~1 km horizontal path length at sea level, temperature = 15°C, with 46 percent relative humidity) along with the wavelength bands defined above.

While Figure 2-2 is representative, the transmission curve will vary with atmospheric conditions, as well as the path taken through the atmosphere; for example,

3

T.R. Hoelter and B.B. Barton. 2003. Extended short wavelength spectral response from InGaAs focal plane arrays. Proceedings of SPIE 5074:481-490.

4

Available at http://www.sensorsinc.com/downloads/paper_HighSpeedSWIRImagingAndRangeGating.pdf. Last accessed March 25, 2010.

FIGURE 2-1 Nightglow irradiance spectrum under different moonlight conditions. SOURCE: Vatsia, L.M. 1972. Atmospheric optical environment. Research and Development Technical Report ECOM-7023. Prepared for the Army Night Vision Lab, Fort Belvoir, Va.


TABLE 2-1 Definition of Spectral Regions with Long-range Atmospheric Transmission

Designation | Wavelength Band (μm) | Physical Significance and Comments
Solar blind UV | 0.2-0.28 | Solar radiation in this band is blocked by the Earth’s ozone layer, so any radiation in this region is likely man-made. Once under the ozone layer, the atmosphere is transparent to wavelengths as short as ~200 nm, where oxygen absorption limits the transmission.
UV | 0.28-0.4 | Atmosphere is transparent
Visible | 0.4-0.7 | Peak of solar spectrum
Near infrared | 0.7-1.0 | Long-wavelength cutoff defined by silicon detector response
SWIR | 1.1-2.7 | Overlaps with telecommunications wavelengths; large commercial infrastructure available at 1.3 and 1.55 μm
MWIR | 2.7-6.2 | Atmospheric transmission window, molecular vibrational absorptions
LWIR | 6.2-15.0 | Atmospheric transmission window, molecular vibrational absorptions
VLWIR | 15.0-20.0 | Molecular vibrational absorptions

NOTE: LWIR = long-wavelength infrared; MWIR = mid-wavelength infrared; VLWIR = very long wavelength infrared.

FIGURE 2-2 Display of the atmospheric transmittance levels. SOURCE: Data from the Santa Barbara Research Institute, a subsidiary of Hughes, and OMEGA Engineering, Inc. Available at http://www.coseti.org/atmosphe.htm. Accessed March 29, 2010.


looking through the atmosphere from a space-based platform will differ in the details.

FINDING 2-1

For any sensor application, the relevant spectral range is set by the overlap of the spectral signature of the target with the passbands of the transmission medium between the target and the detector.

DETECTION

In general, detectors are divided into two classes: thermal and photon (or quantum).5 Thermal detectors operate by the absorption of incoming radiation, which causes a change in the temperature of the detector that is sensed through some temperature-dependent parameter (for example, resistance). Thermal detectors are typically sensitive across a wide range of incident wavelengths. Quantum detectors depend on the direct interaction of the incoming light with the detector materials, resulting, for example, in electron-hole pair creation in a semiconductor. Photo-generated carriers can be measured by directly measuring the charge collected during an integration period, by measuring the photocurrent, by a change in resistance (photoconductive), or by the voltage generated across a junction (photovoltaic).

5

R.L. Petritz. 1959. Fundamentals of infrared detectors. Proceedings of IRE 47(9):1458-1467.


Thermal Detection

In thermal detectors, photon absorption leads to a small temperature rise of the detector, which is sensed by a temperature-dependent property of the material such as the pyroelectric effect or a temperature-dependent resistance. An advantage of thermal detectors is that they typically are very broadband; a disadvantage is that it is a challenge to make a structure with a measurable temperature rise for low-power signals.6 Other thermal detection approaches include thermoelectric devices (thermocouples and thermopiles) and Golay cells, which sense the thermal expansion of a gas. In general, there is a trade-off between the response speed of a thermal detector and its sensitivity. Thermal isolation allows longer integration times to detect weaker signals, but this means that the detector response time is necessarily increased.
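This trade-off can be made concrete with a simple lumped thermal model. The sketch below is illustrative only; the heat capacity, thermal conductance, and signal power are assumed values, not taken from any specific device.

```python
# Illustrative lumped model of a thermal detector (values assumed, not from
# the report): better thermal isolation (smaller G) increases the temperature
# rise per watt but slows the response, since tau = C/G also grows.

C_th = 1e-9   # heat capacity of the detector element (J/K), assumed
G_th = 1e-7   # thermal conductance to the heat sink (W/K), assumed
P_sig = 1e-9  # absorbed signal power (W), assumed

tau = C_th / G_th         # thermal response time (s)
delta_T = P_sig / G_th    # steady-state temperature rise (K)

print(f"response time = {tau*1e3:.1f} ms, temperature rise = {delta_T*1e3:.2f} mK")
```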

Quantum Detection

Quantum or photon detectors, typically semiconductors with bandgaps matched to the photon energy, operate by the generation of electron-hole pairs by the absorption of a photon. There are two major classes of photon detectors: photoconductive and photovoltaic.

Photoconductors

In a photoconductor, the photoexcited carriers are detected through the change in resistance they induce. Often the mobilities of electrons and holes are quite different in the semiconductor, with the consequence that the faster carrier can transit the detector several times before the carriers recombine. This provides a gain mechanism.

Photovoltaic Detectors

In a photovoltaic device, the photoexcited electron and hole are separated by the built-in field associated with a p-n junction and collected. Particularly for indirect bandgap semiconductors, such as silicon, the absorption region has to be extended to ensure a good quantum efficiency leading to p-i-n designs. There is usually a trade-off between extending the absorption region for high probability

6

B. Cole, R. Homing, B. Johnson, K. Nguyen, P.W. Kruse, and M. C. Foote. 1994. High performance infrared detector arrays using thin film microstructures. Proceedings of the Ninth IEEE International Symposium on Applications of Ferroelectrics 653-656.


of absorbing a photon and shortening it to ensure that recombination mechanisms do not impact the collection efficiency.

Avalanche Photodiodes

Avalanche photodiodes incorporate high-field regions that lead to carrier multiplication to increase signal levels above the characteristic noise sources downstream in the electronics. The carrier multiplication is accomplished by imparting sufficient kinetic energy to a carrier for it to create an additional electron-hole pair by impact ionization. There is always some excess noise associated with the multiplication, but this can be minimized by designs that allow primarily one carrier to be multiplied while suppressing the multiplication of the oppositely charged carrier.
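One common way to quantify this excess noise is the McIntyre model, F(M) = kM + (1 − k)(2 − 1/M), where M is the multiplication gain and k is the ratio of the ionization coefficients of the less-ionizing carrier to the more-ionizing one. This model is not taken from the report; it is a standard approximation, sketched below to show why a small k (multiplication dominated by one carrier) keeps the excess noise low.

```python
# Hedged illustration (standard McIntyre approximation, not from the report)
# of the avalanche excess noise factor F(M) = k*M + (1 - k)*(2 - 1/M).

def excess_noise_factor(gain: float, k: float) -> float:
    return k * gain + (1.0 - k) * (2.0 - 1.0 / gain)

for k in (0.0, 0.1, 0.5):
    values = [round(excess_noise_factor(M, k), 2) for M in (2, 10, 50)]
    print(f"k = {k}: F(M=2, 10, 50) = {values}")
```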

INFORMATION ENCODED BY PHOTONS

A photon is the quantum mechanical element of all electromagnetic radiation. Photon energy is given by

E = hc/λ ≈ 1.24/λ eV,

where h is Planck’s constant, c is the speed of light, and λ is the wavelength of the infrared photon in micrometers. By collecting photons, measurements can be made of light’s intensity, temporal variations in intensity, spectrum, polarization, electric field phase, incident angle, and photon time of arrival. These types of measurements will now be defined in greater detail.
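As a quick illustration of this relation, the short sketch below evaluates E = hc/λ for a few representative wavelengths; the specific wavelengths are example values only.

```python
# Sketch of the photon-energy relation above for a few example wavelengths
# spanning the visible, SWIR, and LWIR bands.

h = 6.626e-34   # Planck's constant (J*s)
c = 2.998e8     # speed of light (m/s)
q = 1.602e-19   # joules per electron volt

def photon_energy_eV(wavelength_um: float) -> float:
    """Photon energy in eV for a wavelength given in micrometers."""
    return h * c / (wavelength_um * 1e-6) / q

for lam in (0.5, 1.55, 10.0):
    print(f"{lam:5.2f} um -> E = {photon_energy_eV(lam):.3f} eV (rule of thumb: {1.24/lam:.3f} eV)")
```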

Intensity

Intensity, the incident power per unit area, is the most commonly used optical imaging signal. The variation of signal intensity across the focal plane is recorded as a gray-scale image.

Spectrum

Images can be panchromatic, monochromatic, multispectral (including three-color traditional RGB [red, green, blue]), or hyperspectral (multiple spectral bands across the wavelength range of interest). Spectral information can be obtained in several ways, including dispersion into different pixels (using diffraction or refraction),


temporal modulation of spectral filters, use of on-chip absorptive filters, measuring the size of a charge packet (for example, for X-ray energy spectroscopy), or varying the bandgaps of multiple photon-absorbing regions.

Polarization

Imagers have been developed to measure the complete polarization states of the electromagnetic field described by the Stokes parameters or, more commonly, the linear polarization components.7

Dynamics

Time scales can range from still imaging, to video rates, to fast (e.g., kilohertz) amplitude fluctuations due to target phenomenology, to high-speed imaging (e.g., megahertz), to acquiring sub-ns (nanosecond) range information from single photons for active LADAR (laser detection and ranging) imaging.

Time Delay

The time delay from an excitation to the reception of a photon provides a measure of distance to the object in the same way as in a radar receiver. This is an active sensor application and is beyond the scope of this study. However, it is worthwhile to note that advances in both ultrafast sources and high-speed photon counting detectors will make available in the visible and near-infrared (NIR) spectral regions many of the advanced radar concepts, such as chirped pulses and synthetic imaging concepts, that have been so successful in longer-wavelength spectral regions.

Imagers are available with many different designs and architectures to exploit these different characteristics of optical signals, but it is difficult to design a single imager that is optimized for simultaneously measuring all of these attributes.

Phase and Incidence Angle

Imagers can perform heterodyne or other types of carrier-phase detection. High-speed detectors can allow detection of temporal-phase variation by measuring the beat frequency between a local oscillator and a return signal. Alternatively, wavefront sensors are used to measure spatial-phase variation, allowing analysis of atmospheric wavefront distortions for adaptive optical correction. These sensors enable measurement of a small phase distortion in optical waves under significant

7

For additional detailed information on Stokes polarization parameters, please see http://spie.org/x32376.xml. Last accessed on May 6, 2010.


background wavefront aberrations. A Shack-Hartmann sensor is a frequently used wavefront sensor consisting of a microlens array in front of a multiple-element detector or focal plane array; the local wavefront tilt is registered through the positions of the imaged spots from each microlens on the sensor array.8

FINDING 2-2

While the spatial variation of signal intensity is most often the quantity evaluated to produce an image, spectral distributions, polarization, phase, and temporal characteristics are additional information channels that can be exploited in some applications.

THE LIMITS IMPOSED BY DIFFRACTION

Spatial Resolution

Optical imaging systems can have limitations in resolution caused by imperfections in the lenses or by their misalignment, which result in defects of the image that are often referred to as optical aberrations. In addition, for transmission through the atmosphere, variations of the index of refraction due to air currents and temperature variations also cause changes in the image. Aberrations describe the amount by which a geometrically traced ray misses a specific location in the image.9 If all of these aberrations are dealt with, diffraction is the ultimate limit of optical focusing. For an aberration-free optical system with uniform illumination of a circular input aperture, the result at the focus is a bright central disk surrounded by a series of concentric rings of rapidly diminishing amplitude. This is known as an Airy pattern (shown in Figure 2-3), and the diameter of the central disk is given by ~1.22λ/NA, where λ is the wavelength and NA the numerical aperture of the optical system (the sine of the half-angle of the light acceptance cone).10 Mathematically, the intensity versus position in the Airy pattern is given by

8

J. Schwiegerling and D.R. Neal. 1994. Historical development of the Shack-Hartmann wavefront sensor. J Opt Soc Am A 11:1949-1957.

9

Harold Rothbart and Thomas H. Brown. 2006. Mechanical Design Handbook: Measurement, Analysis, and Control of Dynamic Systems, Second Edition. New York: McGraw-Hill Companies, Inc.

10

NA is related to the F/# notation commonly used to describe the light acceptance cone in photography. NA = sin θ, where light incident on the lens at angles up to θ is imaged onto the focal spot. In terms of the diameter of the lens, D, and the focal length, f, tan θ = D/(2f). F/# = f/D, so for small angles where sin θ ≈ tan θ ≈ θ, NA ≈ 1/[2F/#].

FIGURE 2-3 The Airy disk. SOURCE: Figure courtesy of Cambridge in Colour. Available at http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm. Accessed on March 29, 2010.


I(r) = I0[2J1(2πm)/(2πm)]², with m = rNA/λ,

where r is the radial coordinate and J1 is the first-order Bessel function, with a first zero at m = 0.61.

As is very well known, the minimum focal spot diameter also sets the separation distance at which two point objects can be resolved as distinct. The Rayleigh resolution criterion is obtained by setting the minimum separation of two objects equal to the radius of the Airy disk,11

Δx = 0.61λ/NA.

Detector optical systems capable of producing images with angular resolutions that are as good as the instrument’s theoretical limit are said to be diffraction limited.

For an ideal circular aperture, the two-dimensional diffraction pattern, the Airy disk, is used to define the theoretical maximum resolution for the optical system. When the diameter of the disk’s central peak becomes large with respect to the size of the pixel in the FPA, it begins to have a visual impact on the image.
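The sketch below evaluates the Airy profile and the Rayleigh resolution for an assumed MWIR wavelength, numerical aperture, and pixel pitch; these values are illustrative, not drawn from the report.

```python
# Sketch of the diffraction quantities above for assumed example values of
# wavelength, numerical aperture, and pixel pitch.

import math
from scipy.special import j1   # first-order Bessel function of the first kind

wavelength = 4.0e-6   # MWIR example wavelength (m), assumed
NA = 0.25             # numerical aperture (roughly F/2 optics), assumed
pixel_pitch = 15e-6   # detector pixel pitch (m), assumed

def airy_intensity(r: float) -> float:
    """Normalized Airy intensity I(r)/I0 with m = r*NA/wavelength."""
    x = 2.0 * math.pi * r * NA / wavelength
    return 1.0 if x == 0.0 else float((2.0 * j1(x) / x) ** 2)

rayleigh = 0.61 * wavelength / NA                  # radius of the Airy disk (m)
print(f"Rayleigh resolution : {rayleigh*1e6:.1f} um")
print(f"Airy diameter/pixel : {2.0*rayleigh/pixel_pitch:.2f}")
print(f"I(0.5 x Rayleigh)   : {airy_intensity(0.5*rayleigh):.2f}")
```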

OPTICAL SYSTEMS

Numerical Aperture and Field of View

The numerical aperture, NA = sinθ ≤ 1, describes the light collection power of the optical system. A larger NA results in higher resolution (see equation above) and, therefore, requires more pixels in the focal plane array if the same area is to

11

J.W. Strutt (III Lord Rayleigh). 1879. Investigations in optics, with special reference to the spectroscope. Monthly Notices of the Royal Astronomical Society 40:254.


be imaged at this higher resolution. The field of view is the extent of the imaged region on the focal plane array referred back to the objects being imaged.

Curved Focal Planes

Everyone is familiar with one optical system that uses a curved focal plane array, namely the human eye. Nature chooses this curvature because it makes the optics much simpler. In contrast, the many optical elements in, for example, a standard camera lens are required to faithfully reproduce the image on the flat focal plane of the camera. Our materials technology, which relies on epitaxial crystal growth and the accompanying device fabrication technologies and has largely derived from planar silicon integrated circuit technology, makes curved focal plane arrays a difficult option. Recently there has been significant work, particularly in visible systems based on silicon materials, to adapt to curved focal surfaces.12 A flat surface can conformally map to a cylinder, but it cannot map to a sphere without deforming. Practical curved focal plane technologies would make a significant difference in image capture and in the size and weight of optical systems.

DETECTIVITY

The noise-equivalent power (NEP) is the input power at which a detector exhibits a signal-to-noise ratio of unity. The detectivity, D, is the inverse of the NEP; this of course depends on the detector area (A) and the detection bandwidth (BW). For observation of an extended object, the signal scales as the area, while the noise associated with the dark current scales as √A; the noise also scales as √BW. These simple extrinsic parameters can be eliminated with a simple normalization; the resulting parameter is D* = √(A·BW)/NEP, which is more characteristic of detector material performance.
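A minimal numerical sketch of this normalization follows; the NEP, area, and bandwidth are assumed example values.

```python
# Minimal sketch of the D* normalization above; NEP, area, and bandwidth are
# assumed example values.

import math

NEP = 1e-12       # noise-equivalent power (W), assumed
area = 0.01       # detector area (cm^2), e.g., a 1 mm x 1 mm device
bandwidth = 1.0   # electrical bandwidth (Hz)

D = 1.0 / NEP                                 # detectivity (1/W)
D_star = math.sqrt(area * bandwidth) / NEP    # cm*Hz^0.5/W (Jones)

print(f"D = {D:.1e} 1/W, D* = {D_star:.1e} cm*Hz^0.5/W")
```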

Quantum Efficiency

The signal level at the detector is directly proportional to the probability that an incident photon results in an electrical signal; this is known as the quantum efficiency (QE). The external quantum efficiency includes effects such as reflection from optical surfaces that can be addressed with additional engineering (for example, antireflection coatings), while the internal quantum efficiency is more characteristic of the detector material and device geometry.

The prerequisite for quantum efficiency is absorption of a photon leading

12

R. Dinyari, S-B. Rim, K. Huang, P.B. Catrysse, and P. Peumans. 2008. Curving monolithic silicon for non-planar focal plane array applications. Applied Physics Letters 92:091114.


to some, typically electronic, change in the material such as the creation of an electron-hole pair in a semiconductor. A high absorption coefficient allows thinner material, which facilitates the second component of the quantum efficiency—sensing the electron-hole pair. In a photovoltaic detector this is accomplished by separating the carriers across a p-n junction resulting in a voltage proportional to the number of carriers. This process can be disrupted by recombination, either radiative or nonradiative, before the carriers diffuse into the junction region. In a photoconductive detector, the carriers are sensed as change in the conductance, which is measured as the current for a fixed voltage applied across the device. If the carriers cycle more than once through the detector before recombination, there is a gain associated with the detection that can make it easier to overwhelm noise further downstream in the electronics.
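Assuming each detected photon yields one collected electron (unity gain), the quantum efficiency translates directly into a responsivity in amps per watt, as the short sketch below illustrates; the 80 percent QE and 1.55 μm wavelength are assumed example values.

```python
# Sketch relating quantum efficiency to responsivity, R = QE*q*lambda/(h*c),
# assuming one collected electron per detected photon (unity gain).

h = 6.626e-34   # Planck's constant (J*s)
c = 2.998e8     # speed of light (m/s)
q = 1.602e-19   # electron charge (C)

def responsivity_A_per_W(qe: float, wavelength_um: float) -> float:
    return qe * q * wavelength_um * 1e-6 / (h * c)

# Assumed example: 80 percent QE at 1.55 um
print(f"R = {responsivity_A_per_W(0.80, 1.55):.2f} A/W")
```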

Noise

There are many noise sources whose relative importance varies with the detector material properties, the ambient temperature, the detector operating temperature, the device design, the readout electronics, and other variables. Some of the most important sources are catalogued here. Since these noise sources are in general uncorrelated, the total noise is proportional to the square root of the sum of the squares of the individual noise sources.

Photon Statistics and Background-limited Infrared Detection

There is noise associated with the signal itself. Since photodetection is a discrete process, and most natural sources exhibit Poisson statistics in the fluctuations of the signal level, this noise scales as the square root of the signal level. Photon noise is unavoidable for natural signals and sets a fundamental noise floor. For an extended source (image structure large compared to an individual pixel) the current scales as the pixel area, so the noise again scales as √A.
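A short sketch of this Poisson scaling, for a few assumed photon counts:

```python
# Sketch of Poisson photon statistics: for a mean of N detected photons the
# fluctuation is sqrt(N), so the shot-noise-limited SNR is N/sqrt(N) = sqrt(N).

import math

for n_photons in (100, 10_000, 1_000_000):   # assumed example counts
    print(f"N = {n_photons:>9,}  ->  SNR = {math.sqrt(n_photons):,.0f}")
```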

For engineered sources, it is possible to reduce the shot noise at the expense of increased phase fluctuations, and vice versa. Collectively these are known as squeezed states and have been investigated for communications applications.

Any background photons impinging on the detector also contribute to the noise. While the background is usually not an issue in the UV and visible, in the infrared there is substantial background flux associated with blackbody emission from a room-temperature scene. The peak of the 300 K blackbody emission is in the middle of the LWIR at 10 μm. For cooled infrared detectors (discussed below) this dark current associated with the background radiation and the accompanying noise levels often set the detection limit. This is known as background-limited infrared photodetection (BLIP). Many current infrared systems are close to BLIP;


thus, further improvements in detector dark currents will have little impact on performance. Of course there are many scenarios other than looking at a terrestrial scene, and these have other, often more sensitive, BLIP limits. For example, looking up, a cold sky has a lower BLIP limit, requiring lower detector noise, and space-based cross-link applications have very low backgrounds. There is increasing interest in multispectral and hyperspectral sensing. The spectral filtration inherent in these concepts also reduces the background contribution to the noise.
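As a quick check of the statement that a room-temperature scene peaks near 10 μm, the sketch below applies Wien's displacement law for a few assumed scene temperatures.

```python
# Wien's displacement law, lambda_max = b/T with b ~ 2898 um*K, applied to a
# few assumed scene temperatures; a 300 K scene indeed peaks near 10 um.

WIEN_B = 2898.0   # Wien displacement constant (um*K)

for T in (300.0, 500.0, 1000.0):
    print(f"T = {T:6.0f} K -> blackbody peak near {WIEN_B / T:.1f} um")
```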

Dark Current

Both photovoltaic and photoconductive detectors are biased under operating conditions and exhibit some dark current even in the absence of illumination. This dark current is usually proportional to the pixel area. Since the dark current is carried by discrete charges (electrons and holes), there is shot noise, scaling as the square root of the dark current, associated with it. This dark current noise is the dominant limitation in many detectors. The contributions to this noise include Johnson noise and generation-recombination noise, discussed below.


Johnson Noise For a photoconductive detector, one component of the dark-current noise is associated with the dark resistance of the detector. This is known as Johnson or Nyquist noise (the mechanism that also gives rise to kTC reset noise); the noise current is given by

i_noise = √[4kT/(R0A)] × √(A·BW),

where k is Boltzmann’s constant, T is the device temperature, R0 is the detector resistance (slope of the I-V curve for a photovoltaic device), A the detector area, and BW the electrical bandwidth. The noise current is written in this form since the factor √(A·BW) is eliminated in evaluating its contribution to D*.
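The sketch below evaluates the Johnson and dark-current shot-noise terms for assumed device values and combines them in quadrature, as noted earlier for uncorrelated noise sources.

```python
# Sketch of the two dark-current noise terms for assumed device values,
# combined in quadrature since the sources are uncorrelated.

import math

k_B = 1.381e-23   # Boltzmann's constant (J/K)
q = 1.602e-19     # electron charge (C)

T = 77.0          # operating temperature (K), assumed cooled detector
R0 = 1e7          # detector (zero-bias) resistance (ohms), assumed
I_dark = 1e-9     # dark current (A), assumed
BW = 1e3          # electrical bandwidth (Hz), assumed

i_johnson = math.sqrt(4.0 * k_B * T * BW / R0)   # Johnson (Nyquist) noise current
i_shot = math.sqrt(2.0 * q * I_dark * BW)        # shot noise on the dark current
i_total = math.sqrt(i_johnson**2 + i_shot**2)    # root-sum-square combination

print(f"Johnson {i_johnson:.2e} A, shot {i_shot:.2e} A, total {i_total:.2e} A")
```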


Generation and Recombination Noise For infrared detectors, the fluctuations in thermally generated carrier densities in the active region also contribute to the noise; this noise source is significant if thermal energies (kT) are comparable to the semiconductor bandgap. This noise source is generally negligible for UV and visible detection. Cooling the detector also eliminates this noise source, but it can be the dominant noise source for uncooled devices. A relatively new device concept, which relies on bandgap engineering concepts available in III-V epitaxial growth, is an nBn (and variants including pMp) geometry that incorporates a barrier layer for the majority carrier in place of the traditional p-n junction carrier separation region, but does not impede the conduction of minority carriers. The result is to


block generation-recombination (G-R) noise currents.13,14 In principle, this can dramatically reduce or eliminate G-R noise and allow improved detector performance including higher-temperature operation.

Readout Noise

New very low power monolithic high-speed analog-to-digital converters have advanced the state-of-the-art noise performance in IR sensors. This can allow BLIP-limited performance to be achieved over a broad range of operational conditions (which means that performance purely from a sensitivity standpoint has plateaued for these conditions). True 14-bit performance can be achieved at pixel rates of more than 20 megapixels per second per channel and noise floors of 0.325 count (approximately 50 μVrms), all while exposed to full-EMI (electromagnetic interference) environments. High-speed ADCs (analog-to-digital converters) allow for oversampling techniques that were previously not possible in 14-bit resolution. ADCs are now available in Quad and Octal packages with high-speed serialized outputs, eliminating hundreds of wires and field-programmable gate array (FPGA) pins when connected to large-format FPAs, further increasing integration and reducing power. Single-board designs with 64 video channels of high-speed, 65 million samples per second, ADCs allow 2K × 2K arrays to run at 30 Hz video rates achieving an overall processing bandwidth of more than 125 megapixels per second. These digitized video data can then be transmitted at 5 gigabits per second using a common LVDS (low-voltage differential signaling) interface over 15 m of cable.
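The arithmetic behind these throughput figures is straightforward; the short sketch below works through the 2K × 2K, 30 Hz, 64-channel example quoted above.

```python
# Arithmetic behind the readout throughput quoted above: a 2K x 2K array at a
# 30 Hz frame rate read through 64 parallel video channels.

cols, rows, frame_rate, channels = 2048, 2048, 30, 64

pixel_rate = cols * rows * frame_rate     # total pixel rate (pixels/s)
per_channel = pixel_rate / channels       # average rate per video channel

print(f"total rate       : {pixel_rate/1e6:.0f} Mpixel/s")   # > 125 Mpixel/s
print(f"rate per channel : {per_channel/1e6:.2f} Msample/s")
```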

Multiple standard video interface protocols are currently supported, such as FPDP (front-panel data port), Camera Link, Ethernet, Hotlink, and LVDS. Additionally, newer standards are coming into favor for IR sensors, such as HDMI (high-definition multimedia interface), Gigabit Ethernet, FireWire, and USB 3.0. These interfaces allow for easier system integration and can support large-format arrays.

Other Sources of Noise

There are many other sources of noise of varying degrees of importance in specific situations. Some of these include low-frequency (1/f) noise, temperature

13

S. Maimon and G.W. Wicks. 2006. nBn detector, an infrared detector with reduced dark current and higher operating temperature. Applied Physics Letters 89:151109.

14

Binh-Minh Nguyen, Siamak Abdollahi Pour, Simeon Bogdanov, and Manijeh Razeghi. 2010. Minority electron unipolar photodetectors based on type II InAs/GaSb/AlSb superlattices for very long wavelength infrared detection. Proc SPIE 7608:760825-1.


fluctuations that change device parameters such as the dark current, microphonics, and noise associated with impurity ionization (Barkhausen noise).

BRIEF SURVEY OF DETECTORS BY SPECTRAL REGION

Ultraviolet

For applications where both UV and visible responses are desired, silicon photodiodes are very good UV detectors. For wavelengths below 360 nm, silicon exhibits a direct bandgap resulting in a very strong absorption. Traditional vertical p-n junction devices can be inefficient in this wavelength region if the absorption occurs in the heavily doped contact layers, before the photons can penetrate to the junction region. In-plane devices such as Schottky barrier detectors, in which the transport is parallel to the semiconductor surface, provide an alternative geometry that also has the advantage of very high speed as a result of the low capacitance of the structure.

Solar Blind

Because of the low natural background in the solar blind region (λ < 280 nm), photodetectors and focal plane array imagers operating in this range allow for a number of unique applications; generally, any terrestrial radiation below 280 nm can be assumed to arise from man-made sources. Currently, most solar blind imaging is performed with either a photocathode and microchannel plate combination or a UV-enhanced silicon photodiode with a band-pass filter. Neither of these options is ideal: the photocathode and microchannel plate combination is a fragile vacuum tube device requiring a high-voltage power supply, while the silicon photodiode is not intrinsically solar blind and suffers from increased size and complexity and decreased efficiency due to the optical filtering requirement. Technological and scientific advances in high-aluminum-composition AlGaN-based semiconductor materials have led to the development of visible blind p-i-n photodiode FPA cameras15,16,17,18 and a renewed interest in the development of intrinsically solar

15

J.D. Brown, Zhonghai Yu, J. Matthews, S. Harney, J. Boney, J. F. Schetzina, J. D. Benson , K. W. Dang, C. Terrill , Thomas Nohava, Wei Yang, and Subash Krishnankutty. 1999. Visible-blind UV digital camera based on a 32 × 32 array of GaN/AlGaN p-i-n photodiodes. MRS Internet Journal of Nitride Semiconductor Research 4(9):1-6.

16

J.D. Brown, J. Matthews, S. Harney, and J. Boney. 1999. High-sensitivity visible-blind AlGaN photodiodes and photodiode arrays. MRS Internet Journal of Nitride Semiconductor Research 5S1(W1.9).

17

B. Yang, K. Heng, T. Li, C. J. Collins, S. Wang, R. D. Dupuis, J. C. Campbell, M. J. Schurman, and I. T. Ferguson.2000. 32×32 Ultraviolet Al0.1Ga0.9N/GaN p-i-n photodetector array. Quantum Electronics Letter, 36(11):1229.

18

J.D. Brown, J. Boney, J. Matthews, P. Srinivasan, and J.F. Schetzina. 2000. UV-Specific (320-365 nm) digital camera based on a 128×128 focal plane array of GaN/AlGaN p-i-n photodiodes. MRS Internet Journal of Nitride Semiconductor Research 5(6).


blind FPA cameras. The first solar blind FPA camera was reported by BAE systems in 2001.19 The first images from a solar blind FPA camera were published in 2002, but the quality was lacking and the FPA did not provide full frame imaging.20 The first 320 × 256 imaging was reported in 2005.21 The only recent reports of solar blind FPAs are from a Chinese group.22,23

Visible

Visible detectors are broadly divided into charge-coupled device (CCD) imagers and complementary metal oxide semiconductor (CMOS) imagers. Prior to discussing each of these in greater detail, the performance and operation of CCD and CMOS technologies will be contrasted. In addition, avalanche photodetectors are important for low-light-level applications.

In general, CCD and CMOS technologies can share much of the same processing equipment, and both have benefited from Moore’s law scaling advances (see Figure 2-4); however, the detailed process flows for CCDs and CMOS evolved separately and with different requirements. During the evolution of CMOS technology it became possible to implement reasonable-quality imagers that had some advantages and disadvantages over the already mainstream CCD imaging technology.

Initial claims were that the increasing performance of CMOS imagers, driven by rapid Moore’s law progress, as well as the level of integration available (which enabled single-chip solutions) and the low cost of commodity CMOS processes, would rapidly make CCD imaging technologies obsolete. CCDs were perceived as requiring “specialized” processes, implemented in low volumes on less cost-effective


19

P. Lamarre, A. Hairston, S.P. Tobin, K.K. Wong, A.K. Sood, M.B. Reine, M. Pophristic, R. Birkham, I.T. Ferguson, R. Singh, C.R. Eddy, Jr., U. Chowdhury, M.M. Wong, R.D. Dupuis, P. Kozodoy, and E.J. Tarsa. 2001. AlGaN UV Focal Plane Arrays. Physica Status Solidi (A), 188(1):289.

20

J.P. Long, S. Varadaraajan, J. Matthews, and J.F. Schetzina. 2002. UV detectors and focal plane array imagers based on AlGaN p-i-n photodiodes. Opto-Electronics Review 10(4):251.

21

R. McClintock, K. Mayes, A. Yasan, D. Shiell, P. Kung, and M. Razeghi. 2005. 320x256 solar-blind focal plane arrays based on AlxGa1-xN. Applied Physics Letters 86(1):011117.

22

Yuan YongGang, Zhang Yan, Chu KaiHui, Li XiangYang, Zhao DeGang, and Yang Hui. 2008. Development of solar-blind AlGaN 128x128 ultraviolet focal plane arrays. Science in China Series E: Technological Sciences 51(6):820.

23

Yongang Yuan, Yan Zhang, Dafu Liu, Kaihui Chu, Ling Wang, and Xiangyang Li. 2009. Performance of 128×128 solar-blind AlGaN ultraviolet focal plane arrays. Proceedings of the SPIE, 7381:73810I.

FIGURE 2-4 A depiction of the CCD and CMOS imager systems. The photoresponsive elements are the CCD sensor and the CMOS imager, respectively. Available at http://www.siliconimaging.com/ARTICLES/CMOS%20PRIMER.htm#imagesensors. Accessed March 29, 2010.


dedicated process lines. The reality has proven different from these early and simplistic predictions.24

First, CCDs have continued to dominate the market for most high-performance imaging applications. Second, CMOS processes used for imagers have increasingly become specialized, enabling the higher performance needed to compete with CCD imagers. CCD processes have also become more complex, and CCD-CMOS processes have emerged that allow CCD imagers to be monolithically integrated with CMOS electronics used for control, analog-to-digital conversion, and other on-chip processing.25,26 To understand some of the performance issues, a short description of key features of CCD and CMOS technologies is provided here.

24

S. Paurazas, J. Geist, F. Pink, M. Hoen, and H. Steiman. 2000. Comparison of diagnostic accuracy of digital imaging by using CCD and CMOS-APS sensors with E-speed film in the detection of periapical bony lesions. Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology and Endodontology 89(3):356-362.

25

Craig L. Keast and Charles G. Sodini. 1993. A CCD/CMOS-based imager with integrated focal plane signal processing. IEEE Journal of Solid-State Circuits 28(4):431-438.

26

V. Suntharalingam, B.E. Burke, J.A. Burns, M.J. Cooper, and C. L. Keast. 2000. Merged CCD/SOI-CMOS technology. Proc SPIE 3965:246-253.

Charge-coupled Device Imagers

CCD imagers typically collect and store charge (photoelectrons or holes) generated by incident light under collection electrodes and then sequentially shift the stored charge packets to a readout amplifier to produce a time-dependent output signal containing the image information. To isolate charge packets in adjacent pixels from one another, multiple electrode phases are used.27

Typically either three or four electrode phases are employed, which has generally resulted in a need for two, three, or even four layers of polysilicon gate material to define the clock phases needed for the electrodes; this is in contrast to most standard CMOS processes, which employ only one layer of polysilicon to provide the gates for the nMOS and pMOS transistors.

In a CCD, charge packets are swept from pixel to pixel along a CCD register by varying the potential applied to each clock phase. Since several thousand transfers may be required to reach the edge of the chip, the charge transfer efficiency must be kept quite high, with losses of only 10⁻⁵ to 10⁻⁶ per transfer, both to ensure that the amplitude of the charge packet is preserved during transfer and to prevent charge smearing during readout. Although a number of factors can cause charge loss or trapping during transfers, one important requirement is to ensure that charge transfer is not blocked by potential barriers between phase electrodes.
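The need for such small per-transfer losses follows directly from the number of transfers; the sketch below evaluates the fraction of a charge packet retained after an assumed register length of several thousand transfers.

```python
# Fraction of a charge packet retained after N transfers, (1 - loss)^N, for an
# assumed register length of several thousand transfers.

transfers = 4000   # assumed number of transfers to reach the output amplifier
for loss in (1e-4, 1e-5, 1e-6):   # charge lost per transfer
    retained = (1.0 - loss) ** transfers
    print(f"loss {loss:.0e}/transfer -> {retained*100:.2f}% retained after {transfers} transfers")
```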

To help minimize the formation of potential barriers between phases, it is advantageous to have very small gaps (less than 100-200 nm) between electrodes defining adjacent clock phases. A decade ago, defining such small gaps using lithography was not practical, especially given the requirement for high yields (no shorts allowed between phases) and the length of the region that must remain defect free (on the order of 10 cm or more for a large-format CCD.) The alternative is to define the electrodes for one clock phase in a layer of polysilicon, thermally oxidize that polysilicon, and then cover it with another layer of polysilicon in which the next clock phase is defined.28 Using this method, small gaps can be defined between clock phases with high yield and without using aggressively scaled lithography, but at the expense of a process that often uses three layers of polysilicon.

It is worth noting that the ability to avoid the need to pattern extremely small features in the CCD process has allowed the use of older-generation lithography systems such as 1:1 reduction scanning slit lithography systems (for example, the Perkin-Elmer/SVGL Micralign systems). Since these systems can print a field as large as a full 150 mm diameter wafer, very large format CCD imagers can be defined

27

See, for example, James R. Janesick. 2001. Scientific Charge-Coupled Devices. Bellingham, WA: SPIE-The Society of Photo-Optical Instrumentation Engineers, or J.D.E. Beynon and D.R. Lamb. 1980. Charged-Coupled Devices and Their Applications. McGrawHill.

28

See, for example James R. Janesick. 2001. Scientific Charge-Coupled Devices. Bellingham, WA: SPIE-The Society of Photo-Optical Instrumentation Engineers, or J.D.E. Beynon and D.R. Lamb. 1980. Charged-Coupled Devices and Their Applications. London; New York: McGrawHill.


cost-effectively and without field stitching techniques, but with the limitation that printed feature sizes must generally be on the order of 1 μm or larger. This should be contrasted to the lithography techniques needed for CMOS imagers employing aggressive transistor scaling (say 180 nm or below), where devices are usually patterned using high numerical aperture deep UV steppers or step-and-scan systems. Because of the field size limitations of those lithography systems, CMOS imager chip sizes are currently limited to standard lithographic field sizes of less than 33 × 22 mm, unless field stitching methods are used.

There are both advantages and disadvantages of shifting charge packets across a CCD imager to move charge to the output amplifier.29 One advantage is that the very high fidelity of the charge transfer process, combined with the simple pixel design (a capacitor), results in very low fixed pattern noise.30 A second advantage is that highly optimized readout amplifiers can be implemented, enabling very low readout noise, and it is also straightforward to implement correlated double sampling to eliminate the kTC noise associated with the gate capacitance of the readout circuit. To date scientific users such as astronomers, requiring the lowest possible noise levels, continue to use CCD imagers over CMOS imagers. Another benefit of being able to move charge packets from pixel to pixel is that charge packets can be moved during integration of the image, to allow a charge packet to “follow” a moving image. This feature is typically exploited for time delay-and-integrate (TDI) imagers that move the charge packets at a steady rate in one direction to precisely match the motion of an image slewing across the focal plane.

Applications for TDI imagers range from machine vision applications such as imaging parts in motion on a high-speed conveyor belt, to airborne imaging systems where large swaths of data are imaged in “pushbroom” fashion. A more recent development is a two-dimensional TDI capability available from the orthogonal transfer CCD (OTCCD), which allows charge shifting in both the x- and the y-axes, permitting left, right, up, or down shifting of the charge during integration.31,32,33 The OTCCD can provide electronic image stabilization during image integration,

29

Eric R. Fossum. 1991. Wire transfer of charge packets using a CCD-BBD structure for charge-domain signal processing. IEEE Transactions on Electron Devices 38(2):291-298.

30

Eric Fossum, Sunetra K. Mendis, Bedabrata Pain, Robert H. Nixon, and Zhimin Zhou. California Institute of Technology, assignee. February 1, 2000. Active pixel sensor having intra-pixel charge transfer with analog-to-digital converter. U.S. Patent 6,021,172.

31

B.E. Burke, R.K. Reich, E.D. Savoye, and J.L. Tonry. 1994. An orthogonal-transfer CCD imager. IEEE Transactions on Electron Devices 41(12):2482-2484.

32

John Tonry and Barry E. Burke. 1998. The orthogonal transfer CCD. Experimental Astronomy 8(1):77-87.

33

John Tonry, Barry Burke, and Paul Scheter. 1997. The orthogonal transfer CCD. Publications of the Astronomical Society of the Pacific 109:1154-1164.


allowing low light imaging with long integration times when images require stabilization to correct for atmospheric turbulence or platform jitter.
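A simple illustrative model of the TDI benefit is sketched below; the per-stage signal and read noise values are assumed, and the model simply accumulates the signal over the stages while charging the read noise only once per readout.

```python
# Illustrative TDI model (assumed values): the signal accumulates over the TDI
# stages while the read noise is added only once, so SNR grows roughly as
# sqrt(N) when photon shot noise dominates.

import math

signal_per_stage = 50.0   # photoelectrons per stage, assumed
read_noise = 10.0         # read noise (electrons rms), assumed

for stages in (1, 16, 64, 128):
    signal = stages * signal_per_stage
    noise = math.sqrt(signal + read_noise**2)   # shot noise plus one read
    print(f"{stages:4d} stages -> SNR = {signal/noise:.1f}")
```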

The need to shift charges over long distances in a CCD does have some disadvantages. For satellite applications, radiation damage of the silicon can result in defects that generate minority carriers under bias (bright spots) and trapping sites (which increase charge transfer inefficiency [CTI]), and shifting charge packets through a long path across the chip increases the probability that a charge packet will come into contact with one or more damage sites.34,35,36 Thus, large CCDs are susceptible to displacement damage, whereas CMOS sensors, where photocharge is read immediately at the pixel, are generally much more tolerant of radiation damage. Another disadvantage of shifting charge packets is the power dissipation associated with the repetitive clocking of the phases. Since the electrodes are capacitors, the energy dissipated per phase clock cycle is ~CV², where C is the capacitance of the electrodes attached to the phase being clocked and V is the voltage swing of the clock. Many scientific CCDs use relatively high voltages (say 10-15 V) compared to the voltages used in CMOS imagers (1-3 V); this can contribute significantly to power requirements. However, it should be noted that CCDs have been designed and fabricated to operate at low voltages, so that the voltage difference is not fundamental; in some cases the choice of a higher voltage is driven by application requirements such as the need to obtain deep depletion or a higher full-well charge capacity, and not by some inherent limitation of CCD technology. It is also worth noting that resonant energy recovery techniques can be applied to reduce power consumption from CCD clocks.
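The sketch below applies this ~CV² estimate for assumed per-phase capacitance and clock rate, illustrating how the choice of clock swing drives the dissipation.

```python
# The ~C*V^2 clock dissipation estimate for assumed per-phase capacitance and
# clock rate, comparing a CMOS-like swing with a scientific-CCD swing.

phases = 3          # three-phase CCD, assumed
C_phase = 10e-9     # electrode capacitance per phase (F), assumed
f_clock = 100e3     # clock rate (Hz), assumed

for V in (2.0, 12.0):
    power = phases * C_phase * V**2 * f_clock
    print(f"clock swing {V:4.1f} V -> ~{power*1e3:.0f} mW of clock dissipation")
```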

Finally, it should be noted that CCDs can be, and have been, successfully integrated monolithically with CMOS transistors, allowing on-chip control, analog-to-digital conversion, and processing functions. For cases where the cost of a custom CCD-CMOS process may be undesirable, clever three-dimensional packaging techniques provide an alternative way of placing separately fabricated CMOS on CCD chips.37

34

V.A.J. Van Lint. 1987. The physics of radiation damage in particle detectors. Nuclear Instrumentation Methods, Physics Research A253:453-459.

35

Albert J.P. Theuwissen. 2007. Influence of terrestrial cosmic rays on the reliability of CCD image sensors—Part 1: Experiments at room temperature. IEEE Transactions on Electron Devices 54(12):3260-3266.

36

Albert J.P. Theuwissen. 2008. Influence of terrestrial cosmic rays on the reliability of CCD image sensors—Part 2: Experiments at elevated temperature. IEEE Transactions on Electron Devices 55(9):2324-2328.

37

J.Y. Yang, A. Taddiken, and Y.C. Kao. 1991. Monolithic integration of GaAs LED array/Si CMOS LOGIC. Technical Digest for the Gallium Arsenide Integrated Circuit (GaAs IC) Symposium 301-304.

Complementary Metal Oxide–Semiconductor Imagers

The distinguishing feature of most CMOS imagers is that transistors are placed in each pixel, typically to allow resetting and readout of the detector within that pixel.38 Figure 2-4 shows some examples of possible pixel electronics. Designing a very simple pixel with a single transistor per pixel is certainly possible and is analogous to the design of a DRAM (dynamic random access memory) cell. In fact, early DRAMs were occasionally used as imagers by experimentalists. When the pixel’s transistor is switched on, any accumulated photocharge in the pixel is dumped onto a column line, allowing the pixel to be read out and reset to a reference voltage. When the switch is off, charge integrates on the pixel from photocurrent. This pixel design is not generally used for high-performance imaging, because the large column capacitance leads to low responsivity and high-input-referred noise.

More typical pixel designs employ three transistors.39,40 One transistor is used to reset the pixel to a reference voltage, the second provides a source follower to buffer the charge integrated on the pixel and drive a voltage onto a column line, and the third transistor provides a selection transistor that allows only that pixel to drive the column when the pixel’s row in the imager is selected for readout.
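A simplified behavioral sketch of such a three-transistor pixel follows; the reset voltage, node capacitance, and source-follower gain are assumed values, and the model ignores noise and nonlinearity.

```python
# Simplified behavioral model of a three-transistor pixel (all values assumed):
# reset to a reference voltage, integrate photocurrent on the sense node, then
# buffer the node voltage onto the column line through the source follower.

V_RESET = 2.5    # reset reference voltage (V), assumed
C_NODE = 5e-15   # photodiode plus sense-node capacitance (F), assumed
SF_GAIN = 0.85   # source-follower gain, assumed

def read_pixel(i_photo: float, t_int: float) -> float:
    """Column voltage after integrating a photocurrent for t_int seconds."""
    v_node = V_RESET - i_photo * t_int / C_NODE   # photocurrent discharges the node
    v_node = max(v_node, 0.0)                     # saturation (full well reached)
    return SF_GAIN * v_node                       # row select enables this output

print(f"dark pixel  : {read_pixel(0.0, 10e-3):.2f} V")
print(f"bright pixel: {read_pixel(0.5e-12, 10e-3):.2f} V")
```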

More complex pixels can include additional transistors to provide switches for snapshot shutters or correlated double sampling, or more sophisticated amplifiers such as capacitor transimpedance amplifiers (CTIAs).41,42 It is also possible to place analog-to-digital converters in each pixel, resulting in direct digital outputs from each pixel. Another very different pixel design could be the readout electronics needed for a Geiger-mode43 avalanche photodiode (APD) array, where digital outputs are generated when single photons are detected; depending on the readout design, a count of the number of detected photons could be kept in the pixel, or

38

Zeljko Ignjatovic, Yang Zhang, and Mark Bocko. 2008. CMOS image sensor readout employing in-pixel transistor current sensing. In Proceedings IEEE International Symposium on Circuits and Systems, May.

39

Bedabrata Pain, Thomas Cunningham, Shouleh Nikzad, Michael Hoenk, Todd Jones, Bruce Hancock, and Chris Wrigley. 2005. A back-illuminated megapixel CMOS image sensor. Jet Propulsion Laboratory, NASA.

40

Vyshnavi Suntharalingam, Dennis D. Rathman, Gregory Prigozhin, Steven Kissel, and Mark Bautz. 2007. Back-Illuminated three-dimensionally integrated CMOS image sensors for scientific applications. Proceedings of SPIE 6690:6690009-9.

41

X. Liu, B. Fowler, S. Onishi, P. Vu, D. D. Wen, H. Do, and S. Horn. 2005. CCD/CMOS hybrid FPA for low light level imaging. Proceedings of SPIE 5881:58810C.

42

Haluk Kulah and Tayfun Akin. 1999. A current mirroring integration based readout circuit for high performance infrared FPA applications. IEEE Transactions on Circuits and Systems—II: Analog and Digital Signal Processing 50(4):181-186.

43

In Geiger-mode operation of an avalanche photodiode, the bias is sufficiently large that a single incident photon causes an uncontrolled discharge that is not self-limiting; additional circuitry is supplied to remove the bias and reset the detector.


the time of arrival of the photon (time stamp) could be stored, as is desirable for LADAR applications.

CMOS pixel electronics can be integrated monolithically with silicon detectors, or detector arrays can be hybridized with CMOS readout integrated circuits (ICs).44 A very typical hybridization arrangement is to bump-bond (usually with indium bumps) a detector array on top of a silicon CMOS readout IC.45 One compelling reason for the hybridized arrangement is that detectors can potentially occupy 100 percent of the pixel area, since detector area does not compete with the transistors for real estate. Another great advantage of this technique is that the detector arrays can be fabricated in a different fabrication facility, using a different process, from the CMOS. Thus, special processes can be used to fabricate deep-depletion low-dark-current silicon p-i-n diodes, or higher-voltage devices such as APDs, without altering the CMOS foundry processes. Using bump-bonding can make small pixel pitches (below 10 μm) difficult, especially when high yield and 100 percent pixel operability are required. Recently three-dimensional integration processes using wafer bonding have been developed, and even disparate materials such as silicon and InP have been monolithically integrated, with pixel size down to 6 μm.46,47,48,49

A great advantage of monolithic CMOS imagers has been the ability to integrate a complete imaging system, including pixel electronics, addressing and control circuitry, analog-to-digital conversion, and even some signal processing into a single chip that has relatively simple digital interfacing requirements and does not require the user to design analog readout electronics (which often have

44

A.G. Andreou, P.O. Pouliquen, and C.G. Rizk. 2009. Noise analysis and comparison of analog and digital readout integrated circuits for infrared focal plane arrays. Pp. 695-699 in Proceedings of the 43rd Annual Conference on Information Sciences and Systems (CISS09), Baltimore, Md., March 18-20.

45

Kun-Sik Park, Tae-Woo Kim, Yong-Sun Yoon, Jong-Moon Park, Jin-Yeong Kang, Jin-Gun Koo, Bo-Woo Kim, J. Kosonen, and Kwang-Soo No. 2007. Fabrication of a direct-type silicon pixel detector for a large area hybrid X-ray imaging device. IEEE Nuclear Science Symposium Conference Record M18-194-M18-197.

46

S. Das, A. Chandrakasan, and R. Reif. 2003. Design tools for 3-D integrated circuits. Pp. 53-56 in IEEE Proceedings of the Asia and South Pacific Design Automation Conference.

47

K. Banerjee, S. Souri, P. Kapur, and K. Saraswat. 2001. 3D ICs: A novel chip design for improving deep-submicrometer interconnect performance and systems-on-chip integration. Proceedings of IEEE 89(5):602-633.

48

Steven E. Steen, Douglas LaTulipe, Anna W. Topol, David J. Frank, Kevin Belote, and Dominick Posillico. 2007. Overlay as the key to drive wafer scale 3D integration. Microelectronic Engineering 84(5-8):1412-1415.

49

P. Leduc, F. de Crecy, M. Fayolle, B. Charlet, T. Enot, M. Zussy, B. Jones, J.-C. Barbe, N. Kernevez, N. Sillon, S. Maitrejean, and D. Louisa. 2007. Challenges for 3D IC integration: bonding quality and thermal management. Pp. 210-212 in Proceedings of the IEEE International Interconnect Technology Conference.

A principal disadvantage is that the CMOS support electronics consume real estate on the chip. In the case of the per-pixel electronics, the area available in the pixel for the detector is reduced, so the fill factor is often limited to 30 to 60 percent. This limits the low-light performance of the devices unless microlenses are used to improve the fill factor. Unfortunately, microlenses are less effective when used in low-F/# (F-number) imaging systems and may not be appropriate for all applications. The presence of other electronics outside the pixel, such as banks of analog-to-digital converters, can also be an issue if the chip is going to be used in a four-side abutted (tiled) arrangement as part of a large mosaic focal plane. In some cases, as much as 50 percent of the die area may not be used for the actual image sensing, making it more difficult to array the chips without losing large parts of the imaging field, throwing away light, or having duplicate imaging systems (optics).

The ability to pack a number of transistors into a small area of a pixel is improved if more deeply scaled CMOS processes are used.51,52 However, more deeply scaled processes, especially those optimized for digital applications, often have limitations that can adversely affect imager performance. Maximum voltages are more limited, often to 1 to 2 V for deeply scaled (45 to 180 nm) processes, which reduces the dynamic range of the imager. If transistors are fabricated on thin epitaxial layers, or if the doping levels in the substrate are increased, the thickness of the silicon region able to collect photoelectrons is decreased, greatly limiting the red and VNIR responsivity. Microlens arrays may be needed to focus incident light onto small photosites (poor inherent fill factor), which may preclude effective use of the device with low-F/# optics.53 In some cases, back-side illumination is now being used on CMOS devices to improve the fill factor and the spectral response, but the increased complexity makes these devices increasingly similar to the CCDs made by “specialized processes.”54,55

50

Fayçal Saffih and Richard Hornsey. 2007. Reduced human perception of FPN noise of the pyramidal readout CMOS image sensor. IEEE Transactions on Circuits and Systems for Video Technology 17(7):924-930.

51

Michael Aquilino. 2006. Development of a Deep-Submicron CMOS Process for Fabrication of High Performance 0.25 μm Transistors. M.S. thesis, Rochester Institute of Technology, Rochester, N.Y. Available at https://ritdml.rit.edu/bitstream/handle/1850/5193/2006_Michael_Aquilino.pdf?sequence=1. Last accessed March 29, 2010.

52

L. Wilson ed. 1997. The National Technology Roadmap for Semiconductors. San Jose, Calif.: Semiconductor Industry Association.

53

E.A. Watson, W.E. Whitaker, C.D. Brewer, and S.R. Harris. 2002. Implementing optical phased array beam steering with cascaded microlens arrays. Proceedings of IEEE Aerospace Conference 3: 1429-1436.

54

Tommy A. Kwa, Pasqualina M. Sarro, and Reinoud F. Wolffenbuttel. 1997. Backside-illuminated silicon photodiode array for an integrated spectrometer. IEEE Transactions on Electron Devices 44(5):761-765.

55

A.G. Golenkov, F.F. Sizov, Z.F. Tsybrii, and L.A. Darchuk. 2006. Spectral sensitivity dependencies of backside illuminated planar HgCdTe photodiodes. Infrared Physics and Technology 47(3):213-219.

CMOS imagers shine in certain applications: it is relatively straightforward to implement region-of-interest readout, or even random access to individual pixels, which can be very valuable for certain tracking functions. Moreover, fast shuttering is easier to implement in these imagers than in their CCD counterparts.

Because CMOS imagers are often targeted at low-cost applications and because deeply scaled CMOS processes are comparatively expensive on a cost-per-square-centimeter basis, CMOS imager manufacturers are incentivized to deliver the maximum number of pixels in the smallest possible die area. This is especially true for very cost-sensitive markets such as cell phone cameras, which are also the high-volume drivers for the market but have small profit margins. This has led to a trend toward very small pixel sizes, which keeps chip costs low and also allows chips with multiple megapixels to fit within lithographic stepper field sizes and to be produced with high yield. Unfortunately, these pixel sizes may not be well matched to certain optical systems and may lead to unfavorable trade-offs in dynamic range, low-light sensitivity, and other parameters.
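
How small a pixel still makes optical sense can be checked with a back-of-the-envelope calculation that compares the pixel pitch to the diffraction-limited blur of the lens, roughly 2.44λF/# across. The Python sketch below is illustrative only; the wavelength, F-numbers, and pixel pitches are assumed values, not data from this report.

```python
# Compare pixel pitch to the diffraction-limited blur spot (Airy disk diameter
# ~ 2.44 * lambda * F#). All numerical values are illustrative assumptions.

def airy_diameter_um(wavelength_um: float, f_number: float) -> float:
    """Approximate diameter of the Airy disk to the first dark ring, in micrometers."""
    return 2.44 * wavelength_um * f_number

def pixels_per_blur(pitch_um: float, wavelength_um: float, f_number: float) -> float:
    """Number of pixels spanned by the diffraction blur diameter."""
    return airy_diameter_um(wavelength_um, f_number) / pitch_um

if __name__ == "__main__":
    wavelength = 0.55  # green light, micrometers (assumed)
    for f_number in (1.4, 2.8, 5.6):
        blur = airy_diameter_um(wavelength, f_number)
        for pitch in (1.4, 2.2, 6.5):  # assumed pixel pitches, micrometers
            ratio = pixels_per_blur(pitch, wavelength, f_number)
            print(f"F/{f_number}: blur {blur:.1f} um, {pitch} um pixel -> {ratio:.1f} pixels per blur")
```

When the blur spot spans several pixels, the extra pixels add read noise, cost, and data volume without adding resolution, which is one reason very small pitches suit only fast, short-focal-length optics.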

Avalanche Photodiodes

Avalanche photodiodes, mentioned briefly above, are highly sensitive semiconductor electronic devices that use amplification by avalanche processes to enhance the sensitivity for low light levels (see Figure 2-5). The ideal APD would be low cost and would have background-limited dark noise, broad spectral and frequency response, no excess noise, and a gain that ranges from 1 to 10⁶ or more.

APDs achieve gain by accelerating a photoexcited carrier (electron or hole) to energies above the bandgap, where it can create an additional electron-hole pair by an inverse Auger process (impact ionization).
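
The price paid for this gain is multiplication noise, commonly summarized by McIntyre's excess noise factor F(M) = kM + (1 − k)(2 − 1/M), where k is the ratio of the ionization coefficients of the two carrier types. The sketch below evaluates this standard expression; the k values used (low for a silicon-like multiplier, higher for an InGaAs-like one) are representative assumptions rather than measured device parameters.

```python
# McIntyre excess noise factor F(M) = k*M + (1 - k)*(2 - 1/M) for mean gain M
# and ionization coefficient ratio k. The k values below are representative
# assumptions, not measured device parameters.

def excess_noise_factor(gain: float, k_ratio: float) -> float:
    """Excess noise factor for an APD with mean gain `gain` and ionization ratio `k_ratio`."""
    return k_ratio * gain + (1.0 - k_ratio) * (2.0 - 1.0 / gain)

if __name__ == "__main__":
    for label, k in (("silicon-like (k ~ 0.02)", 0.02), ("InGaAs-like (k ~ 0.4)", 0.4)):
        for gain in (10, 50, 100):
            print(f"{label}: M = {gain:3d} -> F = {excess_noise_factor(gain, k):.1f}")
```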

One specific example, the reach-through APD, is characterized by a large active area (~180 μm across), a wide depletion region (~30 μm), a photon detection probability in excess of 50 percent, and a good timing response of less than 300 ps.56 Among the disadvantages of this device are its high breakdown voltage of about 300 V, which is not compatible with silicon electronics, and the proprietary process required to thin the device structure so that the depletion region reaches through the entire device.

The shortcomings associated with the reach-through APD were addressed with the development of the Geiger mode avalanche photodiode (GM-APD).57,58,59,60

56

Don Phelan and Alan P. Morrison. 2008. Geiger-mode avalanche photodiodes for high time resolution astrophysics. Pp. 291-310 in High Timing Resolution Astrophysics, Don Phelan, Oliver Ryan, and Andrew Shearer, eds. New York: Springer.

FIGURE 2-5 Schematic (a) and SEM (b) cross sections of a germanium/silicon APD. Doping concentrations and layer thicknesses were confirmed by secondary ion mass spectrometry (SIMS). The floating guard ring design labeled GR in cross section (a) was used to prevent premature breakdown along the device perimeter. ARC, anti-reflection coating. (Reprinted by permission from Macmillan Publishers Ltd: Nature Photonics. Yimin Kang, Han-Din Liu, Mike Morse, Mario J. Paniccia, Moshe Zadka, Stas Litski, Gadi Sarid, Alexandre Pauchard, Ying-Hao Kuo, Hui-Wen Chen, Wissem Sfar Zaoui, John E. Bowers, Andreas Beling, Dion C. McIntosh, Xiaoguang Zheng, and Joe C. Campbell. 2009. Monolithic germanium/silicon avalanche photodiodes with 340 GHz gain–bandwidth product. Nature Photonics 3:59-63. Copyright 2008.) These devices are widely deployed in long-wavelength and high-bit-rate optical transmission systems (see J.C. Campbell and H. Nie. 2000. High speed, low noise avalanche photodiodes. Proceedings of the Device and Research Conference 23:458-461).

In fact, the steps used to fabricate this device are compatible with CMOS processing. The disadvantages, however, are that these devices have a limited active area, on the order of 50 μm across. As the active area diameter increases, there is a rapid increase in dark count noise caused by the presence of process-induced defects, which act as carrier generation centers within the device.61 In addition, this APD has reduced quantum efficiency because of its smaller detection volume, which is limited by the depletion region, whose width is typically less than 1 μm.

The schematic cross section of a typical APD shows a few basic structural elements, including an absorption region and a multiplication region. Under illumination, an electric field across the absorption region separates the photogenerated holes and electrons.62 This field sweeps one carrier type toward the multiplication region, which is designed to support a large electric field that provides internal photocurrent gain by impact ionization.

The APD gain region must be wide enough to provide a gain of at least 100 for silicon APDs, or 10 to 40 for germanium or InGaAs APDs. In addition, the electric field profile in the multiplication region must provide this gain at a field strength below the breakdown field of the diode.

If the reverse bias voltage is less than the breakdown voltage, the avalanche eventually dies down due to losses; even so, a single photon will generate hundreds or even thousands of electrons. Above the breakdown voltage, the acceleration of the current carriers is great enough to keep the avalanche alive without additional external stimulus. Thus, a single photon is sufficient to generate a constant current that can be measured using external electronic equipment. This current is calculated as I = R0(λ) × M × Ps,

57

A.M. Moloney, A.P. Morrison, C.J. Jackson, A. Mathewson, and P.J. Murphy. 2002. Large-area geiger-mode avalanche photodiodes for short-haul plastic optical fiber communication. Proceedings of SPIE, Opto Ireland, Optoelectronic and Photonic Devices 4876:438-445.

58

A.M. Moloney, A.P. Morrison, J.C. Jackson, A. Mathewson, and P.J. Murphy. 2002. Geiger mode avalanche photodiode with CMOS transimpedance amplifier receiver for optical data link applications. IT&T Annual Conference, Transmission Technologies.

59

A.P. Morrison, V.S. Sinnis, A. Mathewson, F. Zappa, L. Varisco, M. Ghioni, and S. Cova. 1997. Single-photon avalanche detectors for low-light level imaging. Proceedings of SPIE, EUV, X-Ray, and Gamma-Ray Instrumentation for Astronomy VIII 3114:333-340.

60

In Geiger mode operation of an avalanche photodiode, the bias is sufficiently large that a single incident photon causes an uncontrolled discharge that is not self-limiting. Instead additional circuitry is supplied to remove the bias to reset the detector.

61

E.A. Dauler, P.I. Hopman, K.A. McIntosh, J.P. Donnelly, E.K. Duerr, R.J. Magliocco, L.J. Mahoney, K.M. Molvar, A. Napoleone, D.C. Oakley, and F.J. O'Donnell. 2006. Scaling of dark count rate with active area in 1.06 μm photon-counting InGaAsP/InP avalanche photodiodes. Applied Physics Letters 89(11):111102.

62

Available at http://www.perkinelmer.com/. Last accessed March 29, 2010.

where R0(λ) is the spectral responsivity of the APD (in amperes per watt), M is the internal gain, and Ps is the incident optical power (in watts). The gain of the APD is dependent on the applied reverse bias voltage.
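
A minimal numerical sketch of this relation follows; the responsivity, gain, and incident power below are assumed values chosen only to illustrate the formula.

```python
# APD output current I = R0(lambda) * M * Ps. All numerical values are
# assumptions chosen only to illustrate the relation in the text.

def apd_current(responsivity_a_per_w: float, gain: float, optical_power_w: float) -> float:
    """Output current in amperes for unity-gain responsivity R0, gain M, and power Ps."""
    return responsivity_a_per_w * gain * optical_power_w

if __name__ == "__main__":
    R0 = 0.5    # A/W near the wavelength of interest (assumed)
    M = 100     # internal avalanche gain (assumed)
    Ps = 1e-9   # 1 nW of incident optical power (assumed)
    print(f"I = {apd_current(R0, M, Ps):.1e} A")  # 5.0e-08 A for these values
```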

APDs are recommended for very high bandwidth applications or where internal gain is needed to overcome secondary amplifier noise. The devices are typically used in low-light detection, laser radar systems, optical data transmission, bar-code scanners, and biomedical equipment.63 They have found their way into military, medical, and communications applications, including positron emission tomography and particle physics experiments.

APDs are photodetectors that provide first-stage gain through avalanche multiplication. They show an internal current gain of around 100 due to impact ionization (the avalanche effect), and doping techniques that allow a greater voltage to be applied before breakdown is reached permit even higher operating gains (>1,000).

Avalanche photodiode arrays can generate digital outputs when a single photon is detected. These devices are capable of storing, on a single pixel, a count of the number of photons detected and the time of arrival of the photons. These photodiodes exhibit photoelectron gains of up to about 50, but issues associated with excess noise and nonuniformity have precluded widespread use of this phenomenon for FPAs employed in low-light-level applications. Attempts to move to longer wavelengths significantly decrease the yields because of the increasing lattice mismatch with the indium phosphide substrate, and the added defect densities contribute to excess dark current. This technology is being incorporated into higher-end short-wavelength infrared (SWIR) imaging and hyperspectral sensors but is currently too expensive for high-volume soldier sensor applications.

As is true of most detectors, the utility of APDs depends on many parameters such as quantum efficiency and total leakage current (the sum of the dark current, photocurrent, and noise). Knowledge of these parameters is important to fully characterize and efficiently operate avalanche photodiodes in Geiger mode.64

Thus, it is necessary to quantify the dark count rate, the excess reverse bias voltage, the optimum operating temperature, the photon detection probability, the after-pulsing probability, and the hold-off time. Because many of these parameters are interdependent, it is necessary to perform trade-offs between the variables to achieve optimum performance for specific applications.

63

More information on avalanche photodiodes is available at http://www.lasercomponents.com/fileadmin/user_upload/home/Datasheets/lc/applikationsreport/avalanche-photodiodes.pdf. Accessed March 29, 2010.

64

Geiger mode is an avalanche mode in which an unlimited avalanche occurs based on detection of a single photon or any number of photons. The avalanche is quenched by reducing the bias voltage.

For example, it is desirable to have a low dark count rate because this will maximize the signal-to-noise ratio and minimize statistical uncertainty. The dark count rate decreases exponentially with decreasing temperature; however, operating at low temperature increases the after-pulsing probability. Thus, changing one parameter may improve performance in a specific area, but it will affect other parameters as well.65
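
A toy model makes the direction of this trade-off concrete: thermally generated dark counts fall roughly as exp(−Ea/kT), while slower trap emission at low temperature raises the after-pulsing probability for a fixed hold-off time. Every parameter in the sketch below (activation energy, trap emission time, 300 K dark count rate, hold-off time) is a hypothetical placeholder, not a measured GM-APD value.

```python
# Toy model of the Geiger-mode APD temperature trade-off: thermally generated
# dark counts drop roughly as exp(-Ea/kT), while slower trap emission at low
# temperature raises the after-pulsing probability for a fixed hold-off time.
# All parameters are hypothetical placeholders.
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def dark_count_rate(temp_k: float, dcr_300k: float = 1e4, e_act_ev: float = 0.6) -> float:
    """Dark count rate (counts/s) scaled from an assumed 300 K value."""
    return dcr_300k * math.exp(-(e_act_ev / K_B_EV) * (1.0 / temp_k - 1.0 / 300.0))

def afterpulse_probability(temp_k: float, hold_off_s: float = 1e-6,
                           tau_300k_s: float = 1e-7, e_trap_ev: float = 0.1) -> float:
    """Probability that a trapped carrier is emitted only after the hold-off time ends."""
    tau = tau_300k_s * math.exp((e_trap_ev / K_B_EV) * (1.0 / temp_k - 1.0 / 300.0))
    return math.exp(-hold_off_s / tau)

if __name__ == "__main__":
    for temp in (300.0, 250.0, 200.0):
        print(f"T = {temp:5.1f} K: dark counts ~ {dark_count_rate(temp):.1e} cps, "
              f"after-pulse probability ~ {afterpulse_probability(temp):.1e}")
```

Running the toy model shows the dark count rate dropping by orders of magnitude as the device cools while the after-pulse probability climbs, which is exactly the kind of interdependence the trade study must balance.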

Near Infrared

Silicon

Silicon sensors are sensitive throughout the visible and out to the silicon bandgap cutoff of ~1.1 μm. In just the opposite of the UV situation discussed above, the very long absorption length associated with the indirect bandgap of silicon requires a very different optimization of the device structure for quantum efficiency and carrier collection.
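
The effect of the long absorption length can be estimated with the single-pass relation QE ≈ 1 − exp(−αd), ignoring reflection and recombination losses. The absorption coefficients in the sketch below are rough, order-of-magnitude values for room-temperature silicon, included only to show why red and near-infrared response demands a thick collection region.

```python
# Single-pass collection estimate for a silicon layer of thickness d:
# QE ~ 1 - exp(-alpha * d), ignoring reflection and recombination losses.
# Absorption coefficients are rough room-temperature values for illustration.
import math

ALPHA_PER_UM = {0.40: 10.0, 0.55: 0.7, 0.80: 0.10, 1.00: 0.008}  # approximate, 1/um

def single_pass_qe(alpha_per_um: float, thickness_um: float) -> float:
    """Fraction of incident photons absorbed in a single pass through the layer."""
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

if __name__ == "__main__":
    for thickness in (3.0, 10.0, 100.0):  # assumed active silicon thicknesses, micrometers
        line = ", ".join(f"{lam:.2f} um: {single_pass_qe(alpha, thickness):.2f}"
                         for lam, alpha in sorted(ALPHA_PER_UM.items()))
        print(f"d = {thickness:5.1f} um -> absorbed fraction at {line}")
```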

Intensifiers

In many fields, it is common to use image intensifiers in front of a camera tube because they permit cameras to work at the lowest light levels possible. These are electron-optic systems made up of an input phosphor-photocathode screen that converts incoming radiation into a beam of electrons, electrodes to control the movement of the electrons, and an output phosphor screen that produces the output image.66 They convert spectral radiation to a visible-light image, which after additional processing can be displayed on a monitor. Most commercially available image intensifiers have axial symmetry; however, some nonaxisymmetrical intensifiers have recently been designed.67 Intensifiers work by utilizing avalanche or Geiger-mode gain behind a photocathode. Thus, extremely small photon fluxes are multiplied several thousandfold, allowing viewing under extremely low light conditions.

65

B.S. Robinson, D.O. Caplan, M.L. Stevens, R.J. Barron, E.A. Dauler, S.A. Hamilton, K.A. McIntosh, J.P. Donnelly, E.K. Duerr, and S. Verghese. 2005. High-sensitivity photon-counting communications using Geiger-mode avalanche photodiodes. Proceedings of the IEEE Lasers and Electro-Optics Society 559-560.

66

K.G. Vosburgh, R.K. Swank, and J.M.J. Houston. 1997. X-ray image intensifiers. Advances in Electronics and Electron Physics 43:205-244.

67

N.W. Adamiak, J. Dabrowski, and A. Fenster. 1996. Design of nonaxisymmetrical image intensifiers. IEEE Transactions on Industry Applications 32(1):93-99.

Short-wavelength Infrared

InGaAs detector technology is quite well developed as a result of its dominant use in fiber-optic telecommunications at ~1.3 to 1.7 μm. By varying the composition, the cutoff wavelength can be extended to as long as 2.6 μm.
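
The connection between composition, bandgap, and cutoff wavelength is simply λc(μm) ≈ 1.24/Eg(eV). The bandgaps in the sketch below (about 0.74 eV for lattice-matched In0.53Ga0.47As and roughly 0.48 eV for an extended-wavelength composition) are approximate literature values used for illustration.

```python
# Cutoff wavelength from bandgap: lambda_c (um) ~ 1.24 / Eg (eV).
# The bandgap values are approximate and used only to illustrate the relation.

def cutoff_um(bandgap_ev: float) -> float:
    """Cutoff wavelength in micrometers for a bandgap in electron volts."""
    return 1.24 / bandgap_ev

if __name__ == "__main__":
    examples = {
        "In0.53Ga0.47As lattice matched to InP (Eg ~ 0.74 eV)": 0.74,
        "extended-wavelength InGaAs (Eg ~ 0.48 eV)": 0.48,
    }
    for label, eg in examples.items():
        print(f"{label}: cutoff ~ {cutoff_um(eg):.2f} um")
```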

Mid-, Long-, and Very Long Wavelength Infrared

Brief History of Infrared Detection

Thallium sulfide and lead sulfide (or galena) were among the first infrared detector materials, developed during the 1930s. Many other materials have since been investigated for infrared detection. Lead-salt detectors are polycrystalline and are produced using vacuum evaporation and chemical deposition from solution, followed by post-growth sensitization.68 Reproducibility was historically poor, but well-defined, although somewhat empirical, recipes were eventually found.

Significant improvement in detector manufacture occurred with the advent of the transistor, which stimulated the development of crystal growth and material purification techniques. This resulted in novel techniques for producing detectors from single crystals.

High-performance detectors were initially based on the use of germanium with the introduction of controlled impurities. Development of high-performance visible and NIR detectors based on silicon began to occur in the 1970s after the invention of the CCD. This resulted in the development of sophisticated readout schemes that allowed both detection and readout to occur on one common silicon chip.

In the 1950s, there was extensive investigation of III-V semiconductors. As a result of its small bandgap (corresponding to a cutoff wavelength of ~5 μm at 77 K), InSb showed promise as a material for MWIR detection, and indeed vastly improved InSb FPAs remain a mainstay of cooled MWIR imaging.

Shortly thereafter, in 1959, HgCdTe (mercury cadmium telluride, or MCT) was found to exhibit semiconducting properties over much of its composition range, with a bandgap variable from 0.0 to 1.605 eV. Later, long-wavelength photoconductivity was demonstrated in HgCdTe, leading the way to the development of infrared detectors.

A shift occurred in the mid-1960s toward using the PbSnTe alloy because of production and storage problems associated with HgCdTe. However, limitations in the speed of response for PbSnTe detectors and the better suitability of HgCdTe for infrared imaging device production, as well as improvements in the technology of the material, once again shifted the focus back to HgCdTe at the beginning of the 1970s.

68

A. Rogalski and J. Piotrowski. 1988. Intrinsic infrared detectors. Progress in Quantum Electronics 12:287-289.

In the mid- to late 1980s, HgCdTe remained the most promising narrow-gap semiconductor for infrared detector arrays. Today, after many years of intensive development, photovoltaic HgCdTe is widely used across all infrared bands for high-performance IR FPAs.

Indium Antimonide

InSb MWIR detectors have been developed continuously since the 1950s and are a quite mature technology. InSb has a bandgap corresponding to a cutoff wavelength of about 5.4 μm at 77 K, making this material a good choice for MWIR detection. InSb detectors are based on bulk material rather than epitaxy, and processing involves impurity diffusion or ion implantation. Relatively large wafers, ~3 to 4 inches (75 to 100 mm), are available.

Mercury Cadmium Telluride

HgCdTe is a ternary compound whose bandgap can be adjusted by varying the relative proportions of mercury and cadmium. HgCdTe is a pseudobinary alloy between HgTe and CdTe, written as Hg1–xCdxTe. The composition range 0.21 < x < 0.26 covers the LWIR regime. Nearly all of today’s LWIR HgCdTe is grown epitaxially in thin layers by molecular beam epitaxy (MBE) or liquid-phase epitaxy (LPE). Both p-type and n-type doping can be reproducibly accomplished. The composition can be varied during growth, allowing the formation of heterojunctions, barriers, and multiband devices. The most commonly used substrate is CdZnTe, but there is extensive work on using both silicon and GaAs substrates, although the large lattice mismatch has resulted in slow progress. Development of photovoltaic arrays began more than 30 years ago, and a high level of technology readiness has been achieved.
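
For a rough feel of how composition sets the cutoff, the sketch below uses the commonly quoted Hansen-Schmit-Casselman empirical fit for the Hg1−xCdxTe bandgap together with λc ≈ 1.24/Eg. The coefficients are quoted from memory of that fit and should be checked against the original reference before quantitative use.

```python
# Empirical Hg(1-x)Cd(x)Te bandgap (Hansen-Schmit-Casselman fit, eV) and the
# corresponding cutoff wavelength. Coefficients quoted from memory; verify
# against the original reference before relying on the numbers.

def hgcdte_bandgap_ev(x: float, temp_k: float) -> float:
    """Approximate bandgap of Hg(1-x)Cd(x)Te at temperature temp_k."""
    return (-0.302 + 1.93 * x - 0.810 * x**2 + 0.832 * x**3
            + 5.35e-4 * temp_k * (1.0 - 2.0 * x))

def cutoff_um(bandgap_ev: float) -> float:
    """Cutoff wavelength in micrometers for a bandgap in electron volts."""
    return 1.24 / bandgap_ev

if __name__ == "__main__":
    for x in (0.22, 0.30):  # roughly LWIR and MWIR compositions
        eg = hgcdte_bandgap_ev(x, 77.0)
        print(f"x = {x:.2f} at 77 K: Eg ~ {eg:.3f} eV, cutoff ~ {cutoff_um(eg):.1f} um")
```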

Recently, Tennant and colleagues presented an empirical result, known as Rule 07 (the 07 refers to the year the result was obtained and was used to stress its transient status),69 that provides a characterization of dark current as a function of bandgap and temperature across a wide range of HgCdTe materials. Figure 2-6 presents the measured data and the semiempirical model. Tennant revisited this characterization and found that it remains a reliable guide.70 The rule is relevant to high-quantum-efficiency devices, and the analysis suggests that Auger recombination (Auger 1) in the n-type HgCdTe is the dominant limiting mechanism.71

69

W.E. Tennant, D. Lee, M. Zandian, E. Piquette, and M. Carmody. 2008. MBE HgCdTe technology: a very general solution to IR detection, described by “Rule 07,” a very convenient heuristic. Journal of Electronic Materials 37(9):1406-1410.

70

W.E. Tennant. 2010. “Rule 07” revisited: still a good heuristic predictor of p/n HgCdTe photodiode performance? Journal of Electronic Materials DOI:10.1007/s11664-010-1084-9.

FIGURE 2-6 Dark current density for HgCdTe as a function of the cutoff wavelength × temperature product. NOTE: TIS = Teledyne Imaging Sensors. SOURCE: W.E. Tennant. 2010. "Rule 07" revisited: still a good heuristic predictor of p/n HgCdTe photodiode performance? Journal of Electronic Materials. DOI: 10.1007/s11664-010-1084-9.

This latest paper also compared both the theoretical and the experimental results for various strained-layer superlattice (SLS) structures against the HgCdTe results. The best SLS results are approaching the Rule 07 limits, while the theoretical work shows that significant improvement remains possible, pending improvements in materials quality and processing.
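
In its simplest published form, Rule 07 is a one-line formula for the dark current density as a function of the cutoff wavelength-temperature product, J ≈ J0 exp[C·1.24q/(kB λc T)], with J0 ≈ 8,367 A/cm² and C ≈ −1.16 for cutoffs longer than about 4.6 μm. The sketch below implements that simple form; the constants are quoted from memory of the Tennant et al. papers and should be verified against the originals before quantitative use.

```python
# Simple form of the "Rule 07" heuristic for HgCdTe dark current density
# (A/cm^2) versus cutoff wavelength and temperature. Constants quoted from
# memory of Tennant et al.; valid in this simple form for cutoffs >~ 4.6 um.
import math

J0_A_PER_CM2 = 8367.0                            # empirical prefactor
C_SLOPE = -1.162                                 # empirical slope constant
EG_OVER_KT_SCALE = 1.24 * 1.602e-19 / 1.381e-23  # (1.24 eV*um) * q / k_B ~ 14,384 um*K

def rule07_dark_current(cutoff_um: float, temp_k: float) -> float:
    """Approximate dark current density for cutoff wavelengths above ~4.6 um."""
    return J0_A_PER_CM2 * math.exp(C_SLOPE * EG_OVER_KT_SCALE / (cutoff_um * temp_k))

if __name__ == "__main__":
    for cutoff, temp in ((5.0, 120.0), (10.0, 77.0), (12.0, 77.0)):
        print(f"lambda_c = {cutoff:4.1f} um, T = {temp:5.1f} K -> "
              f"J_dark ~ {rule07_dark_current(cutoff, temp):.1e} A/cm^2")
```

For a 12 μm cutoff at 77 K this form returns roughly 10⁻⁴ A/cm², consistent with the value quoted later in this section.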

Tennant then went on to compare this dark current density with the background-generated current for a 4π-steradian background at the operating temperature of the device (i.e., a detector surrounded by a cold shield). This result is shown in Figure 2-7. For 77 K operation and an MWIR cutoff, the device is close (within a factor of ~5) to this background limit, while for both SWIR and LWIR operation the dark currents are substantially above the BLIP limit for this very low background situation.

71

Auger processes in semiconductors are three-body interactions in which an electron-hole pair recombines without emission of a photon but rather with excitation of a second carrier to a higher-energy state. Auger processes are essentially the inverse of impact ionization in which an energetic carrier relaxes by creating an electron-hole pair.

Tennant ascribes the LWIR result to Auger recombination processes, while at short wavelengths the increased noise results from a deviation between the optical and electrical bandgaps. Substantial efforts have already been made to reduce the Auger recombination, and it does not appear likely that much further improvement is available in present material and device configurations. Thus, the low-temperature “external radiative limit” remains an elusive goal that does not appear approachable within the constraints of current technology.

It is important to recognize that the BLIP current for LWIR tactical applications, looking at a 300 K background with F/1 optics, is much higher than this "external radiative" limit. At a 77 K operating temperature and with a 12 μm LWIR cutoff, the intrinsic device dark current is ~10⁻⁴ A/cm² (see Figure 2-7). The background-generated current from the blackbody irradiance (300 K scene, F/1 optics) is ~0.18 A/cm², orders of magnitude greater than the intrinsic device dark current.
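
Estimates of this kind come from a blackbody photon-flux integral: the in-band photon radiance of the 300 K scene is integrated over wavelength, weighted by the projected solid angle set by the optics, and converted to a current density through the quantum efficiency. The sketch below is a generic version of that calculation with assumed band edges (8 to 12 μm), quantum efficiency, and geometry; the result depends strongly on those assumptions, so it should be read as an order-of-magnitude illustration rather than a reproduction of the figure quoted above.

```python
# Generic background-limited current estimate: integrate the photon radiance of
# a 300 K blackbody over an assumed spectral band, apply the projected solid
# angle set by the optics, and convert to a current density. Band edges, QE,
# and geometry are assumptions; the result is an order-of-magnitude estimate.
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K
Q = 1.602e-19   # electron charge, C

def photon_radiance(wl_m: float, temp_k: float) -> float:
    """Spectral photon radiance, photons / (s m^2 sr m)."""
    x = H * C / (wl_m * KB * temp_k)
    return 2.0 * C / (wl_m**4 * (math.exp(x) - 1.0))

def band_photon_radiance(wl1_um: float, wl2_um: float, temp_k: float, n: int = 2000) -> float:
    """Trapezoidal integral of photon radiance over the band, photons / (s m^2 sr)."""
    step = (wl2_um - wl1_um) * 1e-6 / n
    total = 0.0
    for i in range(n + 1):
        wl = wl1_um * 1e-6 + i * step
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * photon_radiance(wl, temp_k)
    return total * step

if __name__ == "__main__":
    radiance = band_photon_radiance(8.0, 12.0, 300.0)  # assumed 8-12 um band, 300 K scene
    eta = 0.7                                          # assumed quantum efficiency
    cases = (("F/1 optics", math.pi / 5.0),            # projected solid angle pi/(4F^2 + 1)
             ("unshielded hemisphere", math.pi))       # Lambertian full-hemisphere view
    for label, omega in cases:
        flux_per_cm2 = radiance * omega * 1e-4         # photons / (s cm^2)
        print(f"{label}: background current ~ {Q * eta * flux_per_cm2:.1e} A/cm^2")
```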

FINDING 2-3

MWIR and LWIR detectors are already close to fundamental BLIP for terrestrial operations that look at a 300 K background. Future innovations will focus on device and system optimization for specific applications.

FIGURE 2-7 HgCdTe dark current from Rule 07 relative to the external radiative limit, corresponding to a cold shield at the device temperature. At MWIR there is relatively little room for improvement at the lowest temperatures; however at other wavelengths there is substantial excess dark current, which limits the detector performance for low-background situations. SOURCE: W.E. Tennant, D. Lee, M. Zandian, E. Piquette, and M. Carmody. 2008. MBE HgCdTe technology: a very general solution to IR detection, described by “Rule 07,” a very convenient heuristic. Journal of Electronic Materials 37(9):1406-1410.

Strained-layer Superlattice

Strained-layer superlattice material consists of alternating thin layers of InAs and GaSb. A typical LWIR example of the superlattice structure has a period consisting of 4.4 nm of InAs and 2.1 nm of GaSb. This pair is repeated 300 times or more to form the IR-absorbing region. Because of the type II band offset between the two constituent materials, in which the conduction band of InAs is below the valence band of GaSb, the structure exhibits new bands for holes and electrons, which are separated by an energy difference that is smaller than either of the bandgaps of the InAs or GaSb themselves and is adjustable by varying the thicknesses of the layers. This small effective bandgap is suitable for absorbing IR photons. The structure is grown by MBE and the commonly used substrate is GaSb. Both n- and p-type doping have been demonstrated. Heterojunctions are typically formed by growing contacting regions adjacent to the absorbing region, having a shorter superlattice period and a wider effective bandgap than the absorber. SLS materials are based on very well developed III-V materials, and the vast experience in bandgap engineering in these and related systems holds promise for continuing developments. Several variants have been and continue to be introduced including “W” and “M” structures.72,73 This remains an active research area with significant potential for dramatic advances.

Additionally, the basic SLS structure can be modified by inserting a very thin (a few angstroms) layer of AlSb as a barrier for the majority carrier electrons. This opens up a wide parameter space for bandgap engineering, to enable specialized barriers as well as various heterojunction designs. Devices incorporating these structures have been grown and tested recently with the aim of reducing the dark current.74

The SLS LWIR technology development effort has received substantial funding recently because of its potential as a future, low-cost, III-V-compatible replacement for HgCdTe in some applications. Although progress has been made in device performance, the dark currents remain significantly greater than those of HgCdTe, as shown in Figure 2-8. These are laboratory results, and the technology readiness level of SLS material is well behind that of HgCdTe.

72

B.M. Nguyen, D. Hoffman, P.Y. Delaunay, and M. Razeghi. 2007. Dark current suppression in type II InAs/GaSb superlattice long wavelength infrared photodiodes with M-structure barrier. Applied Physics Letters 91:63511-1.

73

E.H. Aifer, J.G. Tischler, J.H. Warner, I. Vurgaftman, W.W. Bewley, J.R. Meyer, J.C. Kim, L.J. Whitman, C.L. Canedy, and E.M. Jackson. 2006. W-structured type-II superlattice long-wave infrared photodiodes with high quantum efficiency. Applied Physics Letters 89:053519-1.

74

D.Z. Ting, C.J. Hill, A. Soibel, S.A. Keo, J.M. Mumolo, J. Nguyen, and S.D. Gunapala. 2009. A high-performance long wavelength superlattice complementary barrier infrared detector. Applied Physics Letters 95:023508.

FIGURE 2-8 Comparison of theoretical (inside circled region) dark currents of SLS devices with the HgCdTe Rule 07 metric (solid line). The experimental dark currents are above those achieved in HgCdTe, while the theory shows a potential advantage of SLS pending better materials and device processes. SOURCE: W.E. Tennant, D. Lee, M. Zandian, E. Piquette, and M. Carmody. 2008. MBE HgCdTe technology: a very general solution to IR detection, described by "Rule 07," a very convenient heuristic. Journal of Electronic Materials 37(9):1406-1410; and W.E. Tennant. 2010. "Rule 07" revisited: still a good heuristic predictor of p/n HgCdTe photodiode performance? Journal of Electronic Materials. DOI: 10.1007/s11664-010-1084-9.

Quantum-well Infrared Photodetectors and Quantum-dot Infrared Photodetectors

Quantum-well infrared photodetectors (QWIPs) and quantum-dot infrared photodetectors (QDIPs) are unipolar photoconductive devices based on intraband absorption between electronic levels defined by quantum confinement in traditional III-V semiconductors, principally GaAs and InP. The promise is that the III-V growth and processing technology is quite mature, substrates are readily available, and scaling to large arrays should be simpler (and have higher yield) than for HgCdTe-based devices. The issues are related to the relatively weak absorption associated with the quantum confined structures.

For the case of intersubband transitions in III-V QWs, selection rules forbid the absorption of normally incident light, requiring a grating or other optical element to scatter the incident light into the QW plane.

Due to the weak absorption associated with the small fill factor of the QWs relative to the wavelength, this approach can lead to cross-talk issues and constraints on decreasing the pixel size. QDIPs, as a result of their three-dimensional confinement, eliminate this selection rule, allowing normal-incidence detection, but the absorption is still quite weak, about 2 to 3 percent for a single pass through a typical active layer, which results in poor quantum efficiency. This is somewhat alleviated in the detectivity by the low dark currents, which also depend on the total volume of quantum dots. Recently, there has been quite a bit of activity in adding nanostructures such as plasmonics to QDIPs; this is discussed more fully in Chapter 4. Rogalski has recently reviewed progress in both HgCdTe and QWIPs-QDIPs for FPAs.75,76,77
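
The consequence of a 2 to 3 percent single-pass absorption is easy to quantify with the multi-pass estimate QE ≈ 1 − (1 − a)^N, which ignores any losses between passes. The pass counts in the sketch below are assumed values meant only to show how slowly the quantum efficiency builds up.

```python
# Multi-pass absorption estimate for a weakly absorbing QDIP-like layer:
# QE ~ 1 - (1 - a)^N for single-pass absorption a and N optical passes,
# ignoring losses between passes. Pass counts are assumed for illustration.

def multipass_qe(single_pass_absorption: float, passes: int) -> float:
    """Fraction of incident photons absorbed after the given number of passes."""
    return 1.0 - (1.0 - single_pass_absorption) ** passes

if __name__ == "__main__":
    for absorption in (0.02, 0.03):    # 2-3 percent single-pass absorption (from the text)
        for passes in (1, 2, 10):      # assumed numbers of optical passes
            print(f"a = {absorption:.2f}, N = {passes:2d} -> QE ~ {multipass_qe(absorption, passes):.2f}")
```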

Very Long Wavelength Infrared

Many of the same detector technologies being developed for the LWIR can also be optimized for VLWIR operation beyond 12 μm. Historically, bulk doped semiconductors have dominated in this spectral region, which is potentially important for missile detection against a cold (space) background. Mercury cadmium telluride detectors suffer from increasing noise due to thermally generated carriers and Auger processes at these long wavelengths. Both type II superlattice and QDIP-QWIP detectors have shown promise for this spectral region. This remains an active area of investigation.

FINDING 2-4

Continued detector advancement requires improved growth and processing of low defect density compound semiconductor materials. The 30-year trend has been improvements in existing materials along with the incorporation of nanoscale structures in one, two, and three dimensions.

FABRICATION OF DETECTORS AND FOCAL PLANE ARRAYS

Detectors

Each material system brings its own unique set of fabrication issues to maintain high performance. Overall, the dimensional scale of even visible pixels is large compared to the minimum feature size of current lithographic tools (which are following a Moore's law curve, with current manufacturing at the 45 nm node).

75

A. Rogalski. 2006. Competitive technologies of third generation infrared photon detectors. Opto-Electronics Review 14(1):87-101.

76

P. Martyniuk and A. Rogalski. 2008. Quantum-dot infrared photodetectors: status and outlook. Progress in Quantum Electronics 32(3-4):89.

77

P. Martyniuk, S. Krishna, and A. Rogalski. 2008. Assessment of quantum dot infrared photodetectors for high temperature operation. Journal of Applied Physics 104(3):034314-1.


Focal Plane Arrays

A focal plane array is created by arranging individual detector elements in a lattice-like array. Individual detectors in an array are often referred to as pixels, short for picture elements. However, the process of developing an integrated array of detectors is significantly more challenging than fabricating an individual detector element. The overall scheme of silicon-based visible detectors is discussed in the sections on CMOS and CCDs. As a result of the advanced state of silicon technology, these imaging chips integrate to some extent both the detection and the electronics. For infrared detectors, in contrast, the signals have to be moved from the detector material to silicon circuitry, called the readout integrated circuit (ROIC); this is usually accomplished by bonding each pixel to a silicon readout circuit using a myriad of indium bump bonds. The number of bonds scales as the number of pixels, and for very large arrays this is a difficult manufacturing step. Typically each pixel has one independent contact and shares the second contact with other pixels in the array. The distribution of the common contacts impacts electrical cross-talk and readout speed.
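
The scale of the interconnect challenge follows from simple counting: with one independent bump per pixel, megapixel-class hybrids require millions of bonds, and the tolerable number of failed bonds is set directly by the pixel-operability specification. The array formats and operability target in the sketch below are assumed for illustration.

```python
# Indium-bump counting for a hybridized FPA: each pixel needs its own bond, so
# the bump count scales with the format and the number of tolerable dead pixels
# follows from the operability specification. Formats and the operability
# target below are assumed for illustration.

if __name__ == "__main__":
    operability_target = 0.999  # 99.9 percent of pixels connected (assumed spec)
    for cols, rows in ((1024, 1024), (2048, 2048), (4096, 4096)):
        bumps = cols * rows                                # one independent bump per pixel
        allowed_dead = bumps * (1.0 - operability_target)
        print(f"{cols} x {rows}: {bumps:,} bumps, at most {allowed_dead:,.0f} "
              f"dead pixels for {operability_target:.1%} operability")
```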

A fundamental limitation in the development of detector arrays is that light is easily coupled to neighboring pixels in an array, which leads to false counts, or cross-talk.78 There are approaches that can mitigate this limitation, but they add complexity to the manufacturing.79 The array fabrication process is further complicated by the requirement to maintain low leakage current in the individual pixels.80 Nevertheless, progress in arrays has been steady and has paralleled the development of dense electronic structures such as DRAMs.

78

Don Phelan and Alan P. Morrison. 2008. Geiger-mode avalanche photodiodes for high time resolution astrophysics. Pp. 291-310 in High Timing Resolution Astrophysics, Don Phelan, Oliver Ryan, and Andrew Shearer, eds. New York: Springer.

79

J. Ziegler, M. Bruder, M. Finck, R. Kruger, P. Menger, T. Simon, and R. Wollrab. 2002. Advanced sensor technologies for high performance infrared detectors. Infrared Physics and Technology 43(3-5):239-243.

80

Alexis Rochas, Alexandre R. Pauchard, Pierre-A. Besse, Dragan Pantic, Zoran Prijic, and Rade S. Popovic. 2002. Low-noise silicon avalanche photodiodes fabricated in conventional CMOS technologies. IEEE Transactions on Electron Devices 49:387-394.

Manufacturing Infrastructure

The manufacturing infrastructure for large-array fabrication is discussed in Chapter 5. At this point it suffices to recognize that the manufacturing tools are largely those developed by the integrated circuit industry and adapted for FPA manufacturing. The FPA industry is not sufficiently large to support the development of a complete set of unique tools. Because the evolution of the silicon industry is driven by a different set of goals, this can lead to divergence and to gaps in the FPA tool set. One simple example is that the silicon industry has standardized on a field size of 22 × 33 mm² for its lithography tools. The drive to larger pixel counts for FPAs often requires much larger overall FPA sizes, which can be achieved only by abutting multiple exposure fields, requiring special considerations in the design of the focal plane arrays.

FINDING 2-5

An advanced equipment set is required for manufacturing large-pixel-count detector arrays. Equipment availability is dependent on leveraging silicon CMOS developments. The detector market is not in itself sufficiently large to drive equipment development.

CONCLUDING THOUGHTS

Detection and imaging of electromagnetic radiation across the UV, visible, and infrared spectrum have a long history. As a result of its very advanced stage of technological development, silicon is now, and undoubtedly will continue to be, the dominant material for visible sensors. One exception is the need for solar-blind detectors that are insensitive to the solar spectrum after it is filtered by passing through the ozone layer surrounding the Earth. Large-bandgap materials such as AlGaN are being actively developed for this application. First-generation night vision systems used intensified (amplified) visible detection. Increased interest is now being placed on SWIR detection using InGaAs and related materials technology. Much of the progress at these longer wavelengths was catalyzed by the needs of the telecommunications industry for fiber-optic receivers. Infrared detectors have been under development for many years, primarily for military applications. The traditional material systems for cooled detectors are InSb for MWIR and HgCdTe for both MWIR and LWIR. Emerging material systems include the SLS antimonides and intersubband-transition QWIPs and QDIPs in the AlGaAs system. Both of these have the advantage of epitaxial growth on GaAs and possibly silicon substrates and of leveraging the mature GaAs technology developed for electronics and photonics. However, they are at a much earlier stage of development and technology readiness. Figure 2-9 shows the material systems relevant for different wavelength regimes.

FIGURE 2-9 Material systems for UV-visible-infrared detection. Except for the bottom two entries, these material systems have been known and developed for decades. SOURCE: Presented to the committee by Dr. “Dutch” Stapelbroek, University of Arizona.
