4

Active Electro-Optical Component Technologies

As has been described in Chapters 2 and 3, current and emerging active electro-optical (EO) sensing systems are implemented in many different modalities. All require components such as lasers, detectors, optics, and processing techniques to generate photons, bounce them off targets, and transform detected photons into usable information. Other components may be required as well, depending on the implementation. The specific requirements, complexity, and sophistication of the components vary with the manner of implementation of the active EO sensing system and the usable information it is trying to extract. For example, no particular laser or detector technology meets all the requirements of the various active EO sensing approaches. An important factor in the last ten years’ progress has been technological advances in each of the components of the active EO system: improved lasers, detectors, and software; advances in robotics; and improved manufacturing technologies. This chapter discusses the variety of components currently used and some of the key technologies being developed for future systems.

LASER SOURCES FOR IMAGING

Active EO sensors employ coherent sources in the wavelength region from the long-wavelength infrared (around 10 µm) to the atmospheric transmission limit for UV light, around 200 nm. The sources can be based either on lasers or on nonlinear optical systems driven by lasers. Lasers are typically categorized by the type and format of the medium used to generate their output, which at the highest level are gases, liquids, and solids.

Solid materials are further categorized by their electrical characteristics. Solid-state lasers employ insulating solids (crystals, ceramics, or glasses) with elements added (dopants) that provide the energy levels needed for laser action. Energy to excite the levels is provided by other sources of light, either conventional sources such as arc lamps or other lasers, a process called optical pumping.

Solid-state lasers in turn are divided into two broad categories, bulk or fiber, with the latter having recently emerged as an important technology for generation of high average powers with high beam quality, as discussed below.

Even though they are also made from solid-state materials, semiconductor lasers are considered to be a separate laser category. While the lasers can be made to operate by optical pumping, if the semiconductor material can be fabricated in the form of an appropriate p-doped, n-doped (PN) junction, it is possible to pass electrical current through the junction and generate laser output directly. These “diode lasers” are by far the most widely used form of semiconductor lasers, and have led to major advances in source technology for active EO sensors.

The use of lasers for active EO sensors started with the first laser to be operated (1960), the solid-state ruby laser.1 With the development of techniques to generate nanosecond-duration pulses, ruby lasers provided the basis for the first laser rangefinders. Since 1960, almost every type of laser has been employed in demonstrations of active EO sensors. At this point, with a few exceptions, sensors now

____________________

1 T.H. Maiman, 1960, “Stimulated optical radiation in ruby,” Nature 187(4736): 493.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




employ either diode or solid-state lasers, with the latter often combined with nonlinear optics. This looks to be the case for the foreseeable future, owing to a favorable combination of output format, operating wavelength, relatively high efficiency, ruggedness, compact size, and reliability.

With regard to nonlinear optics, most systems employ crystals with special properties that can convert a laser output to shorter wavelengths by harmonic conversion or to longer wavelengths by parametric processes. The latter have the added advantage that the wavelengths generated can be tuned by a variety of techniques, an advantage for sensors that require specific wavelengths, for example, in the detection of specific gases. In some cases parametric and harmonic processes can be combined for added wavelength tuning.

The next sections go into more detail regarding the dominant active EO sensor source technologies. A reasonable amount of space is devoted to diode lasers, because they now feature prominently both as stand-alone devices for low-cost, short-range sensors and as optical pumps for solid-state lasers used for long-range sensors.

Diode Lasers

There are two major types of diode lasers, interband and cascade. The former is by far the more widespread in use, but the recently developed cascade lasers have emerged as an important source of mid- and long-wave infrared emission.

Diode Lasers: Interband, Edge-Emitting

Interband diode lasers employ electronic transitions between energy states in the conduction band and the valence band of the semiconductor crystal. First and foremost, the semiconductor material used for the diode laser must have a direct optical bandgap; that is, the electronic transitions giving rise to laser operation must occur without the assistance of mechanical vibrations (phonons) in the semiconductor.
This eliminates the most common semiconductor material, silicon, as well as the related material germanium. Laser operation in direct-bandgap semiconductors occurs by optical transitions from the lowest-lying electronic states in the conduction band to the highest-lying states in the valence band, with the requirement that there be a higher density of occupied states in the conduction band than in the valence band. This nonequilibrium condition, or “population inversion,” occurs in diode lasers through the injection of sufficient electrical current into the lasing region.

Fabrication of a diode laser (Figure 4-1 for an edge-emitting device) requires that the semiconductor material be available in both n-type (electrons the majority current carrier) and p-type (holes the majority current carrier) forms, achieved by the addition of certain impurities to the materials. While there are many semiconductors with direct bandgaps, many of them, for a variety of reasons, cannot be doped to make both n- and p-type materials to form diodes, notably all of the so-called II-VI binary semiconductors such as CdS, CdTe, ZnS, ZnO, and ZnSe. (The Roman numerals refer to the relative positions of the elements in the periodic table.) Infrared lasers have been operated based on IV-VI materials such as PbS and PbSe but require cryogenic cooling for efficient operation. To date, III-V binary, direct-bandgap semiconductors such as GaAs, GaSb, InAs, InSb and, most recently, GaN, as well as alloys of these crystals with other III-V elements, are by far the most widely used materials for diode lasers. Noncryogenic operation in the 350-2,000-nm region is possible with devices based on III-V materials.
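The wavelength ranges quoted above follow directly from the bandgap energies of these materials via E = hc/λ. A quick sketch of the rule of thumb λ(nm) ≈ 1240/Eg(eV), using approximate room-temperature bandgap values from the general literature (illustrative figures, not data from this report):

```python
# Approximate emission wavelength of a direct-bandgap diode laser:
# lambda (nm) ~ 1240 / E_gap (eV), from E = h*c / lambda.
PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def emission_wavelength_nm(bandgap_ev):
    """Rule-of-thumb emission wavelength for a given bandgap energy."""
    return PLANCK_EV_NM / bandgap_ev

# Approximate room-temperature bandgaps (eV) of some III-V materials
bandgaps = {"GaN": 3.4, "GaAs": 1.42, "InP": 1.34, "GaSb": 0.73, "InAs": 0.35}

wavelengths = {m: emission_wavelength_nm(eg) for m, eg in bandgaps.items()}
# GaN lands in the near-UV, GaAs near 870 nm, and the narrow-gap
# antimonides/arsenides push into the short- and mid-wave IR
```

Under these assumed bandgaps, GaN-based devices sit in the near-UV and GaSb-based devices near 1.7 µm, consistent with the 350-2,000-nm span cited above.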

FIGURE 4-1 Edge-emitting interband diode laser. Reprinted with permission from The Photonics Handbook, online at http://www.PhotonicsHandbook.com. Copyright 2013, Laurin Publishing, Pittsfield, Mass.

Figure 4-1, though a very simplified diagram, shows the key features of an edge-emitting interband diode laser. Electrical current passes through a wire bond to a stripe top contact, which confines the current in two dimensions. In the device shown, the current passes (is “injected”) through p-type material into the “active layer,” fundamentally a junction between p- and n-type material, then through n-type material to an electrical contact to complete the circuit. Laser gain occurs in a thin, 3-D region comprising the active layer, with the third dimension set primarily by the width of the stripe contact. The injected current and the structure of the device act to form an optical waveguide that confines the laser light to the same region as the gain, both vertically (perpendicular to the plane of the junction) and horizontally (parallel to the junction plane).

Most edge-emitting diode lasers operate with the laser cavity formed by the cleaved ends of the waveguide region. One end is coated with a deposited stack of dielectrics to form a highly reflecting mirror, while the other end is coated to provide only a small amount of reflectivity. Typical lengths between the two faces are in the sub- to several-millimeter range. In some cases one diode end has an antireflection coating to enable use of an external mirror and/or tuning element to obtain more control of the diode wavelength. In reality, the details of the junction are much more complex than the simple PN junction implied by Figure 4-1.
Major developments in diode-laser performance since the first demonstrations of operation in the early 1960s have been the result of developing the fabrication technology to make multilayer semiconductor structures (heterostructures) that better confine the lasing region and reduce the current needed to reach laser operation, as well as increase the efficiency in converting electrical power to laser power. As the thickness of the layers has been reduced with improved processes to tens of nanometers, quantum effects that fundamentally change the nature of the semiconductor-material energy levels have been utilized to further improve performance. In addition to structuring perpendicular to the plane of the junction, additional material structuring in the horizontal direction to confine the current and laser power has led to better laser properties.

A critical and fundamental characteristic of a diode is the nature of the region where laser action occurs. Referring to Figure 4-1, in the vertical direction the region of laser emission is limited to a dimension of about 0.5 µm, set by the fundamental nature of the junction. This is comparable to or smaller than the wavelength of the light emitted, which assures that the light emitted in the vertical direction (the “fast axis”) is diffraction limited, consisting of only a single transverse mode. In Figure 4-1 the beam dimension is larger in the vertical direction because the fast-axis light rapidly diverges from the small emitting region. To effectively capture all of the power from the diode laser requires fast optics, at least for the fast-axis light, and this is accomplished with specialized aspheric optics, often fastened directly to the diode-laser package.

The limiting feature of the small dimension is that even relatively low absolute power levels lead to very high power densities at the surface of the edge-emitting diode laser, on the order of 10 MW/cm2, and can lead to catastrophic optical destruction (COD) of the diode laser beyond certain power levels. COD results from the inevitable defects at the surfaces of semiconductors, which absorb the laser light, heat up, and, with enough power, melt the material at the surface. Means to improve the COD level are often considered proprietary or, at a minimum, patentable by the diode manufacturers. For some semiconductor materials, particularly compounds designed to operate at wavelengths of 1,500 nm and longer, the diode power levels are limited by simple heating effects, well below the levels set by COD.

No matter the cause of the limited power, one can increase the power output of an edge-emitting diode laser by increasing the width of the lasing region along the horizontal direction, which reduces the intensity at the surface for a given absolute power level and also reduces the generated heat density. Unfortunately, beyond a dimension of several microns, the light emitted in the direction along the plane of the junction becomes multimode. State-of-the-art diode lasers are able to generate diffraction-limited continuous wave (CW) powers on the order of 0.1 W/µm of horizontal width, so that diode lasers with strictly single-mode outputs are capable of only 0.1-0.2 W of output power. The low power is adequate for many applications, and for optical data storage devices (CDs, DVDs, and Blu-ray discs) the yearly volume of low-power diode lasers approaches 1 billion.
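The facet-intensity figure quoted above is easy to reproduce with a back-of-envelope calculation. The stripe width and vertical spot size below are illustrative assumptions, not measurements of a particular device:

```python
# Back-of-envelope facet power density for an edge-emitting diode laser.
# The facet spot dimensions are illustrative assumptions.
power_w = 0.2           # ~0.1 W per micron of single-mode stripe width
stripe_width_cm = 2e-4  # assumed 2-um stripe, in cm
spot_height_cm = 1e-4   # assumed ~1-um vertical mode size at the facet, in cm

intensity_w_per_cm2 = power_w / (stripe_width_cm * spot_height_cm)
# ~1e7 W/cm^2, i.e., on the order of 10 MW/cm^2 -- the regime in which
# facet defects can absorb, heat, and trigger COD
```

Even a fraction of a watt, squeezed through a facet spot of a few square microns, lands in the 10 MW/cm2 regime cited in the text.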
An upper limit to the power from a single emitting region is found when the laser action, rather than occurring through the cavity formed by the cleaved ends, starts in a perpendicular direction along the width of the stripe. Typical junction widths for so-called broad-stripe lasers are in the 100-400 µm range. State-of-the-art diode lasers operating in the 900-nm wavelength region can generate 10-15 W of CW power with a 100 µm stripe width.

In order to produce diode lasers with higher average power, the semiconductor laser industry, starting in the 1980s, took advantage of improvements in material quality and lithographic techniques to manufacture multiple diode lasers on one piece (“chip”) of semiconductor material. Figure 4-2 is a schematic of a multi-emitter diode “bar”; the most common overall horizontal dimension is 1 cm. Bar fabrication requires a means to suppress laser action along the length of the bar, and an additional manufacturing challenge is mounting the bar on a heat sink to remove the heat efficiently while also keeping the entire bar in one plane, so that all of the emitters line up exactly. If they do not, the net divergence of all of the beams after collection by external optics increases. Key bar parameters are the number of emitters per bar and the stripe width of each emitter, leading to the “fill factor” for the bar, with typical stripe widths in the 100-200 µm range.

Unlike single emitters, the power output of bars running in the CW mode is thermally limited, although recent improved thermal engineering of bar mounts and cooling may lead to COD starting to become a limit as well. The thermal limits to power can be overcome if the diodes are run in pulsed mode, sometimes referred to as quasi-CW (QCW). Typically, this mode is used for pumping solid-state lasers, with pulsewidths and on-time (duty cycle) percentages in the 0.1-1-msec and 1-10 percent ranges.
Present state-of-the-art bars designed for CW operation have fill factors of 30-50 percent, and for the most efficient semiconductor materials, operating in the 900-nm region (“9xx devices”), commercially available power levels are as high as 160 W. Higher powers are possible but with reduced device lifetimes, which drop rapidly as the temperature of the semiconductor junction increases. QCW bars run with fill factors of 75-90 percent and peak power levels of 250-300 W for 9xx devices.

Appendix C provides a number of tables that review the state of the art in lasers today. Table C-1 summarizes the key features of edge-emitting diode lasers in both single-emitter and bar formats. Given the significance of such devices for a number of commercial and military applications, there is continuing development of devices over the entire spectral range, and the performance (power, efficiency) and, to a lesser extent, the wavelength coverage can be expected to improve in the next 10-15 years.
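The bar numbers above can be tied together with simple arithmetic; the specific fill factor and stripe width chosen below are one illustrative point within the quoted ranges:

```python
# Rough emitter count and per-emitter power for a 1-cm CW diode bar,
# using illustrative values from the ranges quoted in the text.
bar_width_um = 10_000    # 1-cm bar
stripe_width_um = 100    # typical stripe width (100-200 um)
fill_factor = 0.5        # CW bars: 30-50 percent fill

n_emitters = int(bar_width_um * fill_factor / stripe_width_um)
per_emitter_w = 160 / n_emitters  # a 160-W bar shared across the emitters

# Note the per-emitter power is well below the 10-15 W a single 100-um
# stripe can deliver in isolation: CW bar output is thermally limited.
```

With these assumptions the bar holds 50 emitters at roughly 3 W each, which illustrates why bar output is set by heat removal rather than by the COD limit of any individual stripe.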

FIGURE 4-2 Diagram of “bar” lasers. SOURCE: © Jenoptik Laser GmbH.

The specific configurations of edge-emitting diode lasers as optical pumps for solid-state lasers are discussed in a later section.

Diode Lasers: Interband, Vertical Cavity

Another class of interband diode lasers, developed more recently than the edge-emitting devices just discussed, is based on improved semiconductor-processing technology. Several designs of these newer devices, vertical-cavity surface-emitting lasers (VCSELs), appear in Figure 4-3. The key feature is that the device is constructed by building up a series of semiconductor layers and that the direction of optical gain in the junction is perpendicular to the plane of the junction. Since the gain path length is very short, the laser cavity requires much higher reflectivity mirrors to permit oscillation, with the output coupling mirror having less than 1 percent transmission. The bottom and top mirrors (distributed Bragg reflectors, DBRs in Figure 4-3) consist of alternating layers of high- and low-index semiconductor materials, and the active region, employing quantum-well-structure semiconductor heterojunctions, is sandwiched between the DBRs. Electrical contacts on the top and bottom allow electrical pumping with current passing through the structure, and a circular hole is etched into one of the contacts to allow the output beam to emerge. The entire length of the VCSEL is on the order of 10 µm for devices operating in the 900-nm region. To generate a diffraction-limited output in that region, the emitting-region diameter has to be kept below 4 µm, with power levels in the 5-mW range, but higher multimode powers are possible as a result of recent work, with Princeton Optronics claiming operation of a 5 W, 976 nm, CW laser with a 300 µm aperture.2
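The need for very-high-reflectivity mirrors can be made concrete with an idealized quarter-wave-stack estimate. The formula below ignores end-medium corrections, and the refractive indices are representative GaAs/AlAs values, not figures from the text:

```python
# Idealized estimate of the mirror-pair count for a VCSEL DBR.
# For a quarter-wave stack of N high/low-index pairs the peak
# reflectivity scales roughly as
#   R = ((q**(2*N) - 1) / (q**(2*N) + 1))**2,  q = n_high / n_low
# (end-medium corrections ignored; indices are representative values).
n_high, n_low = 3.5, 2.9   # roughly GaAs / AlAs near 900 nm
q = n_high / n_low

def dbr_reflectivity(n_pairs):
    x = q ** (2 * n_pairs)
    return ((x - 1) / (x + 1)) ** 2

# Smallest pair count giving better than 99 percent reflectivity
pairs_needed = next(n for n in range(1, 100) if dbr_reflectivity(n) > 0.99)
# On the order of 15-20 pairs -- why VCSEL mirrors are thick multilayer stacks
```

Because the single-pass gain of the thin active region is tiny, even the output coupler must reflect more than 99 percent of the light, which under these assumptions takes well over a dozen semiconductor layer pairs.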
Once VCSELs became commercially available, they found widespread application as 850-nm-region transmitters for short-haul, high-bandwidth fiberoptic data links. The circular output beam of VCSELs and their comparatively small natural divergence make coupling to fibers an easy task. Because of the extremely short optical cavity, the devices operate on a single frequency and can be turned on and off in the 10-100 ps range, allowing direct current modulation for data rates in the tens of gigahertz. Since the fabrication process is similar to that of semiconductor integrated circuits (ICs), it is possible to construct many devices at one time, providing a major cost advantage over edge-emitting diode lasers. Finisar3 claims to have shipped over 150 million devices, which also find application as the source in an optical mouse. Future large-scale applications are likely in optical interconnects for high-speed electronics systems and in fiber-to-the-home.

FIGURE 4-3 Several different VCSEL structures. SOURCE: Courtesy of Princeton Optronics, Inc.

More recently several groups have developed arrays of VCSEL devices to produce much higher total powers, although the beam quality is not high, since the devices are incoherently combined. The main challenge for high-power arrays is heat removal, where the desire to have a high areal density of devices to improve brightness conflicts with the need to remove power. Princeton Optronics has demonstrated 230 W of 976-nm CW power from a 5 × 5-mm array and 925 W of QCW power from the same area.4

VCSELs had been limited in wavelength coverage to 650-1,300 nm, primarily because the GaAs-based technology used to manufacture the electrically pumped devices cannot extend beyond that region. Recently, however, devices operating in the 1,500-nm region have been reported, based on the InP semiconductor, and are expected to find wide use for higher-speed fiber applications, where the low dispersion of silica fibers at 1,500 nm allows 10 GHz and higher long-distance links. Development of the longer-wavelength VCSELs is intense at this writing, given their large potential in the telecom industry.

____________________

2 See http://www.princetonoptronics.com/technology/technology.php#VCSEL.
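The single-frequency behavior noted above follows from the longitudinal-mode spacing c/(2nL) of the short cavity. A rough comparison, using illustrative cavity lengths and a representative semiconductor refractive index:

```python
# Longitudinal-mode spacing (free spectral range) c / (2 n L) for a VCSEL
# versus an edge-emitting diode. Lengths and index are illustrative.
C = 3e8          # speed of light, m/s
n_index = 3.5    # representative semiconductor refractive index

def mode_spacing_hz(cavity_length_m):
    return C / (2 * n_index * cavity_length_m)

fsr_vcsel = mode_spacing_hz(10e-6)   # ~10-um VCSEL structure -> a few THz
fsr_edge = mode_spacing_hz(500e-6)   # 0.5-mm edge emitter -> ~90 GHz

# A few-THz spacing is comparable to the entire semiconductor gain
# bandwidth, so only one longitudinal mode reaches threshold, whereas the
# edge emitter fits dozens of modes under its gain curve.
```

This contrast is why a VCSEL is naturally single frequency while an uncontrolled edge emitter is not.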
Quantum Cascade Lasers

Unlike the interband diode lasers just discussed, quantum cascade lasers (QCLs) rely on a totally artificial gain medium made possible by quantum-well structures, a process sometimes referred to as band-structure engineering. The gain from a QCL relies on transitions entirely between electronic states within the conduction band of the material, as modified by the quantum-well structure. Typical devices are fabricated on InP and employ GaInAs/AlInAs quantum wells, which are formed from nanometer-thick layers, typically 10-15 in number. Devices based on other material combinations have been demonstrated but show inferior performance. Since the energy separation of the states is a function of the structure, it can be adjusted to generate a wide range of wavelengths, covering approximately 3.5-30 and 60-200 µm, with the latter range falling in the terahertz region. The gap in long-wavelength coverage is due to the strong phonon absorption region of the semiconductor, which overcomes the gain in the quantum wells, while the short-wavelength limit is set by several factors, including fabrication difficulties with the required quantum-well alloys and losses due to scattering among different energy states in the conduction band. Different materials systems, now under development, allow operation at shorter wavelengths.

Laser operation comes through injection of a current of electrons through the quantum-well structure, resulting from a voltage applied across the structure. Both the efficiency and the gain would be low if the device employed just one quantum well, since the energy of the emitted photon is small compared to the energy required to put an electron in the conduction band. This drawback is overcome by fabricating a structure consisting of multiple (typically 25-75) quantum wells in series, with another heterojunction structure, the “injector,” through which the electron tunnels from one quantum well to another. Figure 4-4 is a simplified, partial schematic of a QCL structure with distance as the horizontal axis and energy as the vertical, showing the energy levels for the electron with a voltage applied across the structure length that creates the gradient in energy. The QCL name comes from the “cascade” process of one electron interacting with multiple quantum wells as it crosses the structure. The one electron creates multiple photons, corresponding to the number of quantum wells in the device, thereby greatly improving the efficiency.

____________________

3 See http://www.finisar.com/sites/default/files/pdf/Finisar_infographic_timeline_web.pdf.
4 J.-F. Seurin, C.L. Ghosh, V. Khalfin, A. Miglo, G. Xu, J.D. Wynn, P. Pradhan, and L.A. D’Asaro, 2008, “High-power vertical-cavity surface-emitting arrays,” in High-Power Diode Laser Technology and Applications VI, M.S. Zediker, ed., Proc. SPIE 6876: 68760D.
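The efficiency benefit of cascading can be sketched as a simple energy bookkeeping exercise; the wavelength, stage count, and bias voltage below are illustrative assumptions, not parameters of a specific device:

```python
# Why cascading helps: each injected electron can emit one photon per stage.
# All numbers below are illustrative assumptions.
PLANCK_EV_NM = 1239.84               # h*c in eV*nm

wavelength_nm = 5000                 # assumed 5-um mid-IR QCL
photon_energy_ev = PLANCK_EV_NM / wavelength_nm   # ~0.25 eV per photon

n_stages = 30                        # within the typical 25-75 stage range
bias_voltage_v = 10.0                # assumed total voltage across the stack

# Upper bound on conversion efficiency: photon energy delivered per
# electron divided by the electrical energy per electron (e * V).
efficiency_bound = n_stages * photon_energy_ev / bias_voltage_v

# A single-stage device under the same drive would be bounded near
# photon_energy_ev / bias_voltage_v, i.e., n_stages times lower.
```

A mid-IR photon carries only a quarter of an electron-volt, so recycling each electron through dozens of stages is what keeps the ideal efficiency bound at a useful level.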
As with edge-emitting interband lasers, the laser power is emitted perpendicular to the current flow through the layers, so that a given quantum well interacts with only a thin slice of the laser mode. One of the remarkable features of QCLs is that they are able to operate in the mid- and long-wave IR regions without the need for cryogenic cooling, even in the CW mode. This is not possible with interband diode lasers at the same wavelengths, since the high thermal population of the conduction band turns such narrow-bandgap diodes into, essentially, short circuits at ambient temperatures.

FIGURE 4-4 Simplified diagram of a quantum cascade laser. SOURCE: Courtesy of Laser Focus World.

Interband Cascade Lasers

A related semiconductor laser that has features of both interband and cascade diode lasers is the aptly named interband cascade laser (ICL). The operation of the device relies on multiple quantum-well structures, but now both electrons and holes are involved, and the quantum-well transitions are between an upper state in the conduction band and a lower state in the valence band. The material system commonly used involves either GaSb or InAs substrates, with alloys of InAs, GaSb, and AlSb employed for the quantum-well and injector structures. There are two advantages of the ICL over the QCL: (1) shorter-wavelength operation (at room temperature from 2.9 to 5.7 µm, with cryogenic operation from 2.7 to 10.4 µm), and (2) the ability to lase with much lower electrical powers (by a factor of 30 or so) than QCLs, since the transition lifetimes are much longer. As with conventional interband edge-emitting diode lasers, one would expect difficulty with long-wavelength operation at room temperature, and QCLs have an advantage there. The current state of the art is summarized in Appendix C, Table C-2, for both QCLs and ICLs, but, as with conventional interband lasers, these numbers will change in the next 10-15 years.

Solid-State Lasers

The laser transitions for solid-state lasers occur (with a few exceptions) between energy levels of ionized atoms from the 3d transition-metal group (Sc to Zn) or from the ionized rare earth group, the lanthanide series of elements (La to Yb). The atoms are “hosted” in solids (crystals, glasses, or ceramics) and are added as dopants to the mix of elements used to make the solids. The ions giving rise to laser operation are referred to as the active ions.

The first laser to operate was based on an artificial ruby crystal, which, in fact, consists of 3d transition-metal Cr3+ ions hosted in Al2O3, a material known as corundum or sapphire.
(“Sapphire” to the technical community is a transparent crystal, not to be confused with the naturally occurring, colored gemstone of the same name.) For transition metals, laser operation can be found from doubly to quadruply ionized atoms. The 3d electronic states of transition metals have a reasonably large spatial extent and thus interact strongly with the host crystal. Accordingly, the laser properties of transition-metal-doped solid-state lasers depend strongly on the host crystal. In contrast, for the rare earths the laser transitions are primarily among different energy levels of 4f electronic states, and these, because they have a much smaller spatial extent than the 3d states, are much less sensitive to the host crystal. With a few exceptions, a given rare earth in the same ionization state (almost exclusively triply ionized) will provide laser operation at a wavelength that is nearly independent of the host crystal.

The interaction between the host crystal and the active ion also has a profound effect on the span of frequencies/wavelengths where there is optical gain for the laser, referred to as the “linewidth.” In the case of the rare earths, and for some levels of the transition metals, when the ion changes electronic state, the surrounding atoms (the lattice) stay in the same position. Figure 4-5, left, illustrates the case when there is no lattice change, with a drawing referred to as a configuration-coordinate diagram. The horizontal axis is some measure of the lattice position (say, the separation of all the surrounding atoms from the active ion), while the vertical axis represents the combined energy of the lattice and the active ion. In the so-called harmonic approximation, the system energy increases quadratically as the coordinate moves to either side of the lowest-energy, or equilibrium, position. In the classical physics view, this movement will occur as the lattice vibrates from being at some finite temperature.
In a more quantum-mechanical treatment, even at absolute zero, the uncertainty principle means that there is a distribution of displacements about equilibrium. The quantum-mechanical treatment also shows that the vibrational energy is quantized, with the vibrational quantum states called “phonons.” From Figure 4-5, left, assuming that there is no displacement in the equilibrium position between the ground state and the excited state, and that the quadratic change in energy with displacement is the same, the energy difference for light being absorbed (upward transition) and that being emitted (downward transition) is the same and independent of the displacement.

One key and generally valid assumption is that the electronic transition occurs much faster than the lattice vibrates (the Franck-Condon approximation), and so the lattice remains “frozen” during the transition. In the diagram, this is why upward and downward transitions are shown as vertical arrows. In this simple picture, the spectral width of both absorption and emission would be zero, and both would occur at the exact same energy. In reality, other effects, especially that of phonons in disrupting the phase of the light making the transitions, lead to a finite linewidth that is a tiny fraction of the transition energy, on the order of 10^-4 to 10^-5 of the transition frequency/wavelength. This is several orders of magnitude smaller than for semiconductor diode lasers.

Figure 4-5, right, shows a configuration-coordinate diagram for the case when the electronic transition causes the lattice to shift its equilibrium position. One important dynamic is that when the active ion is put into the excited state, the lattice reaches its new equilibrium position on a timescale of picoseconds. As noted, when an electronic transition takes place, it is much faster than this, but for the ions of interest the probability of the transition actually occurring is very low on a picosecond timescale, so the lattice gets to its new position (“relaxes”) before any light is emitted by a transition back to the ground state. Two important things result from this. First, the energies, and hence the frequencies, of light absorbed in electronic transitions from the ground state to the excited state are generally higher than those of light emitted by transitions from the excited state to the ground state.
Second, it is evident from Figure 4-5 and the lengths of the arrows that there can be a range of energies for either absorption or emission, due to the displacement of the two parabolas. The linewidths for absorption and emission can be very broad, on the order of 10-20 percent of the central energy. As noted in the introductory portion of this section, unlike semiconductor diode lasers, which convert electrical power directly to laser power, solid-state lasers need a source of light to operate, which typically is driven by electrical power. The optical pumping process requires that the solid medium absorb light, which then must result in a population inversion between the upper and lower laser levels.

FIGURE 4-5 Left: Configuration coordinate diagram, no change in crystal. Right: Same but with a shift in crystal atomic positions.

A simple electronic system with just two energy levels and no lattice displacement (Figure 4-5, left) cannot be used to make a laser, since the material becomes transparent when a population inversion occurs. At least one higher level is needed to absorb the light, but that level must be able to transfer its excitation to the upper laser level. If the laser transition then occurs to the lowest energy level of the active ion, laser action is possible only when a substantial fraction of the ions are pumped into the upper laser level. This type of laser operation is referred to as "3-level." If the laser operation is not to the lowest energy level but to a higher level, and that level is high enough in energy that in thermal equilibrium its population is small enough not to affect laser operation, the laser operation is "4-level." For systems where the lower level is above the ground state but has a non-negligible population, the operation is sometimes called "3½-level." In Figure 4-5, right, it is evident that, even with only two electronic levels, it is possible to have 4-level operation. As marked in the diagram, pump light creates a transition from the ground state in equilibrium position (1) to the first excited state in nonequilibrium (2), which then relaxes to the equilibrium position (3) and makes a laser transition to the nonequilibrium ground state (4). This type of laser operation is often called phonon-assisted or vibronic. The ruby laser is an example of a 3-level laser, with a narrow-line (nonvibronic) laser transition (the so-called "R-line") terminating on the ground state but with the pumping absorption lines caused by broadband vibronic transitions.
These, with very high probability, deliver their excitation to the upper laser level by a so-called nonradiative process that makes up the difference in energy between the pump levels and the upper laser level by the generation of multiple phonons, often called "multiphonon relaxation." The measured absorption coefficient for ruby shows three broad peaks at 250, 400, and 550 nm due to vibronic transitions, all of which are effective in exciting the upper level of the R-line transition at 694.3 nm. Shortly after the ruby laser was demonstrated, laser researchers were able to operate solid-state lasers based on rare earth elements. The challenge for rare earths was in finding the right combination of energy levels among all the possible ions from that series. The most effective dopant was quickly established as neodymium (Nd3+); a partial, simplified energy-level diagram for the ion appears in Figure 4-6. The most common laser transitions for the ion occur in the 1,050-1,080-nm region, from level E3 to level E2. Optical pumping is possible from the ground state (E1) to a multiplicity of levels (E4), all of which are efficient in exciting level E3 by multiphonon relaxation, with energy cascading down one level to another and finally ending up in the upper laser level. A key reason that the Nd3+ ion works well as a laser is that there is a large gap in energy from E3 to the next-lower level, so large that there is very little probability of multiphonon relaxation, and energy that winds up there can be extracted effectively by laser action. Another key to the success of Nd3+ lasers is that level E2 is sufficiently above the lowest energy level that it has only a negligible population at normal temperatures. On the other hand, energy in that level can rapidly cascade down to the ground state and not build up to the point that laser action stops. Thus, the Nd3+ ion provides true 4-level operation.
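The practical consequence of the 3-level versus 4-level distinction can be made concrete with a Boltzmann-factor estimate. The sketch below uses illustrative values only; the 2,000 cm^-1 splitting is merely representative of the gap between a lower laser manifold such as E2 of Nd3+ and the ground state. It shows why the lower laser level of a 4-level system is effectively empty at room temperature, whereas in a 3-level system the lower level is the fully populated ground state:

```python
import math

K_B_CM = 0.6950  # Boltzmann constant in cm^-1 per kelvin

def thermal_population(delta_e_cm: float, temp_k: float) -> float:
    """Boltzmann occupation factor of a level delta_e_cm (in cm^-1)
    above the ground state, relative to the ground-state population."""
    return math.exp(-delta_e_cm / (K_B_CM * temp_k))

# 3-level system (e.g., ruby): the lower laser level IS the ground state,
# so its relative population is 1 and more than half the ions must be
# pumped to the excited state before any gain appears.
print(thermal_population(0.0, 300.0))     # 1.0

# 4-level system: assume a lower laser level ~2,000 cm^-1 above ground
# (representative of the Nd3+ E2 manifold) at room temperature.
print(thermal_population(2000.0, 300.0))  # ~7e-5: effectively empty
```

The roughly four orders of magnitude between the two cases is why a 4-level laser reaches threshold with only a small excited-state population.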
The right-hand side of Figure 4-6 indicates that the simplified energy levels shown as thick bars are in fact composed of multiple levels, a result of the splitting of the energy states of the rare earth ions by interactions with the surrounding atoms of the host crystal. The clusters of multiple levels indicated by the thick bars are often referred to as "manifolds." Given all these conditions for effective laser operation, it may not be surprising that, among the total of 13 triply ionized rare earth ions that could support laser operation, only Nd3+ has all of the characteristics just described. The other common rare earth laser ions, which include Er3+, Tm3+, Ho3+, and Yb3+, will be discussed below, but they all have one or two unfavorable characteristics that must be overcome for effective laser operation. As discussed below, a combination of laser-based optical pumping and the use of fiber formats has opened up a wide variety of uses for these ions.

FIGURE 4-6 Simplified energy-level diagram for an Nd3+ ion. SOURCE: Courtesy of Rani Arieli, Weizmann Institute of Science.

An alternative Cr3+-doped crystal, BeAl2O4, known as alexandrite from the gem of the same name, emerged from work in the late 1970s and presented a case where the level giving rise to the vibronic transition was, like ruby, higher in energy than the R-line level but close enough that it was partially excited by optical pumping. Vibronic laser action is possible at room temperature around 760 nm, but the optical gain is low and the tuning range is reduced due to excited-state absorption (ESA). By running the crystal well above room temperature, it is possible to improve the laser characteristics and provide tuning from about 720-860 nm. The low gain has limited applications in areas requiring generation of high-energy pulses. The demonstration in 1982 of the Ti3+-doped sapphire (Ti:sapphire) laser provided a system with a well-known host crystal and a new laser ion. The single d electron of the Ti3+ ion provided a system similar to that diagrammed in Figure 4-5, right, with somewhat more complexity due to the details of the vibronic levels. ESA is not a factor, since there are no higher-lying 3d electronic levels and, in sapphire, other possible transitions are too high in energy. Absorption and emission intensities as a function of wavelength appear in Figure 4-7, along with the spectral distribution of the optical gain, which is red-shifted owing to the details of the Einstein relation of the measured emission spectra to the gain spectrum. The half-width of the gain spectrum is about 100 THz in frequency space, the widest of any known laser medium, and the system has been tuned from 660 to 1,100 nm. A unique class of solid-state media has recently emerged, based on divalent transition metals doped in semiconductors, the most well developed of which is the dopant Cr2+ in ZnSe.
As with Ti:sapphire, the vibronic transitions lead to very broad linewidths, but the transitions are centered in the mid-IR region, around 2,500 nm for the case of Cr:ZnSe, which has been tuned from about 2,000-3,500 nm. Further discussion of solid-state lasers below considers the two main formats for these systems, bulk and fiber.
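The fractional linewidths quoted in this section can be converted into absolute bandwidths with a one-line relation: to first order, a fractional width is the same in frequency and wavelength space, so the absolute linewidth is the fraction times the optical frequency c/λ. The sketch below uses illustrative center wavelengths (1,064 nm for a narrow-line Nd3+ transition, 800 nm for Ti:sapphire) to contrast the two regimes:

```python
C = 2.998e8  # speed of light, m/s

def bandwidth_hz(center_nm: float, fractional_width: float) -> float:
    """Absolute linewidth in Hz for a given fractional linewidth
    (to first order, equal in frequency and wavelength space)."""
    center_hz = C / (center_nm * 1e-9)
    return fractional_width * center_hz

# Narrow-line Nd3+ transition at 1,064 nm, fractional width 1e-4:
print(bandwidth_hz(1064, 1e-4) / 1e9)   # ~28 GHz

# Vibronic Ti:sapphire transition near 800 nm, fractional width ~0.2:
print(bandwidth_hz(800, 0.2) / 1e12)    # ~75 THz, the order of the quoted 100 THz half-width
```

The five to six orders of magnitude between these two numbers is the quantitative gap between narrow-line rare-earth lasers and broadly tunable vibronic lasers.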

commonly silicon, BK7 glass, sapphire, and germanium. Key to cold filter performance is high transmission in the range of interest and high rejection outside that band.

Impact of Operating Parameters

The operating efficiency of a Stirling cycle cryocooler can be defined by a figure of merit called the coefficient of performance (COP). For an ideal Stirling cycle refrigerator, the COP can be determined from the two temperatures at which the cycle is operating:

• T_H, the ambient temperature at which the cryocooler is rejecting heat
• T_C, the refrigeration temperature at which the cryocooler is maintaining the IR detector.

These two temperatures define the COP:

COP = T_C / (T_H − T_C)

Inspection of this relationship reveals that the COP rises (the cryocooler efficiency increases) as the refrigeration temperature is increased and/or the ambient temperature at which the cooler is rejecting heat is decreased. For a fixed ambient temperature, this is illustrated in Figure 4-57. This characteristic of the Stirling cycle cryocooler is being exploited by IR detector developers as they refine their detector arrays to operate at warmer and warmer temperatures. By increasing the operating point of a detector from 77 K to 110 K, as an example, the COP of a Stirling cryocooler operating in a 23°C ambient rises from 0.35 to 0.59, an increase of 68 percent. This increase in efficiency results in a corresponding drop in required input power. In addition to the increased cycle efficiency brought about by the increase in detector operating temperature, the reduced temperature differential between the detector and the ambient temperature results in a reduced parasitic heat leak from the IDCA environment through the detector packaging (the Dewar), thereby reducing the refrigeration demand placed on the cryocooler. This effect is also illustrated in Figure 4-57.
This combination of increased efficiency and reduced refrigeration demand results in less power being dissipated by the cryocooler, which in turn reduces the heat that must be managed by the system integrator.

FIGURE 4-57 Impact of FPA operating temperature. SOURCE: D. Rawlings, DRS Technologies, Dallas, Texas.
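The COP relation and the 77 K to 110 K example above can be reproduced directly. The function below computes the ideal Stirling COP from the two operating temperatures; real coolers achieve only a fraction of this ideal figure:

```python
def stirling_cop(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal coefficient of performance for a Stirling cycle
    refrigerator: COP = T_C / (T_H - T_C)."""
    return t_cold_k / (t_hot_k - t_cold_k)

T_AMBIENT = 23.0 + 273.15  # 23 degrees C ambient, in kelvin

cop_77 = stirling_cop(T_AMBIENT, 77.0)    # ~0.35
cop_110 = stirling_cop(T_AMBIENT, 110.0)  # ~0.59
gain_pct = (cop_110 / cop_77 - 1) * 100   # ~68 percent improvement
print(cop_77, cop_110, gain_pct)
```

Note that raising the cold temperature improves the COP through both the numerator and the denominator, which is why modest increases in detector operating temperature pay off disproportionately.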

FIGURE 4-58 Advanced integrated Dewar cooler assembly. SOURCE: DRS Technologies, Dallas, Texas.

FIGURE 4-59 High-operating-temperature IDCA camera core. SOURCE: DRS Technologies, Dallas, Texas.

IDCA designers have taken advantage of these increasing array operating temperatures to develop lighter and lower power IDCAs. IDCAs with 640 × 480 pixel mid-wave IR (MWIR) detectors operating on as little as 2 W of input power at room temperature are being incorporated into IR sensor systems. One such IDCA is shown in Figure 4-58. Figure 4-59 shows a complete camera core based on a version of that IDCA. This module is about 2 × 2 × 2 inches, including the video processing electronics, and runs on less than 5 watts of input power.

Conclusion 4-18: Cooling detector arrays such as those required for HgCdTe avalanche photodiodes operating in the mid-wave infrared band are not a major impediment from a size and power consumption standpoint. Already, state-of-the-art Dewar-cooler technologies, particularly those based on linear drive technology, are getting as small as 5 × 5 × 5 cm with power consumption of a few watts, comparable to those of thermoelectric coolers used for cooling detector arrays to 230 K. MEMS-based coolers are under development that will further reduce the size and power requirements.

TELESCOPES

Telescopes are often used on both transmit and receive. A single aperture for both transmit and receive, called a monostatic system, may be used. In a monostatic system, an approach must be developed for T/R isolation. If near-field reflections feed back into the receiver, it can be damaged or blinded. Alternatively, in a bistatic system, separate apertures are used for transmit and receive. Bistatic systems do not have much of a T/R isolation issue, since the apertures do not share common optics, although near-field reflections in theory could still cause an isolation issue.
For image plane imaging, optics are needed to transfer the captured field from the pupil plane to a focus at the image plane. Apertures can be various sizes, depending on need. No optics are required for pupil plane imaging, at least in theory. The received signal can be captured directly without using any telescope. In reality a telescope may be used to magnify the effective size of the receive focal plane array. While there is no need in this report to discuss the various types of telescopes, there is value in discussing the idea of arrays of subapertures. That approach is considered in the MIMO section in Chapter 3.

ADAPTIVE OPTICS

As will be described in more detail in Chapter 5, the atmosphere affects active EO sensors in three main ways: absorption, scattering, and refractive index variations (which can cause beam spreading or fluctuations). For active EO sensors these effects occur both for the illumination beam and for backscatter from the object being viewed by the receiving sensor. This section will describe methods to correct for distortion in active imaging caused by variations in the index of refraction of the atmosphere. The effects of atmospheric turbulence are discussed in Chapter 5; they can be described as either thin turbulence or extended turbulence (see Figure 5-11). Adaptive optics systems can provide phase compensation for thin turbulence and some compensation for extended turbulence.

Adaptive Optics Systems

If the receive optics for a sensor are larger than the Fried parameter r0 of the atmosphere,167 an adaptive optics mirror can be put in the receive path to compensate for phase distortions. If the transmit optics are larger than the r0 of the atmosphere, again adaptive optics can be used to compensate and therefore to form a beam with less divergence. There are two main parts to any adaptive optics system. One part is the device that actually imposes the phase shift. This can be a mirror with many actuators behind it, or it can be a liquid crystal device or some other device to modulate phase. Adaptive optics mirrors may use electrostrictive (lead magnesium niobate, or PMN) actuators, piezoelectric-driven actuators, or other approaches. The second part is the control system that decides what phase shifts to impose. To determine what phases need to be imposed, the incoming signal is first measured. While multiple measurement instruments may be used, the most popular is the Shack-Hartmann sensor, shown in Figure 4-60. A second method of determining the required phase shift is a metric that judges the quality of the compensation.
One such approach is called stochastic parallel gradient descent (SPGD), where increments toward an ideal metric are chosen along a gradient.

FIGURE 4-60 Shack-Hartmann wavefront sensor. SOURCE: By 2pem (Own work) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons. http://en.wikipedia.org/wiki/Adaptive_optics, downloaded May 17, 2013.

167 The Fried parameter r0 is a measure of the effects on the quality of optical transmission through the atmosphere attributable to random inhomogeneities in its refractive index. It is defined as the diameter of a circular area over which the root-mean-square aberration due to passage through the atmosphere is equal to 1 radian. As such, imaging from telescopes with apertures much smaller than r0 is less affected by atmospheric inhomogeneities than by diffraction due to the telescope's small aperture. However, the imaging resolution of telescopes with apertures much larger than r0 (i.e., all professional telescopes) will be limited by the turbulent atmosphere, preventing the instrument from approaching the diffraction limit.
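A minimal sketch of the SPGD idea on a toy problem rather than a real deformable mirror: all actuator commands are perturbed simultaneously by a random plus/minus pattern, the quality metric is measured for both signs of the perturbation, and the commands are stepped along the estimated gradient. The quadratic "sharpness" metric, the 32-actuator dimension, and the gain and perturbation values are assumptions for illustration only:

```python
import numpy as np

def spgd_step(u, metric, gain=0.3, perturb=0.05, rng=None):
    """One stochastic parallel gradient descent update: apply a random
    +/- perturbation to ALL actuators in parallel, measure the metric
    for both signs, and step along the estimated gradient."""
    if rng is None:
        rng = np.random.default_rng()
    delta = perturb * rng.choice([-1.0, 1.0], size=u.shape)
    dj = metric(u + delta) - metric(u - delta)
    return u + gain * dj * delta

# Toy metric: "image quality" is maximal when the actuator phases match
# a target wavefront that the controller cannot observe directly.
rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, size=32)
metric = lambda u: -np.sum((u - target) ** 2)  # higher is better

u = np.zeros(32)
for _ in range(2000):
    u = spgd_step(u, metric, rng=rng)
print(np.max(np.abs(u - target)))  # small residual error
```

The appeal of SPGD for adaptive optics is that it needs only a scalar quality measurement per trial, not a wavefront sensor, at the cost of many metric evaluations.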

Another issue for an adaptive optics system may be having a point source to use as a guide for finding the optimum phase corrections. Sometimes an artificial point source is created, called a guide star.168

Schemes for Countering Extended Turbulence

Extended turbulence phase changes away from the pupil plane will result in amplitude changes at the pupil plane. No adaptive optics mirrors exist that change both phase and amplitude. When amplitude is changed it can only be decreased, creating a loss to be avoided. An alternative method of compensating for extended turbulence is available for coherent imaging, whether it is spatial or temporal heterodyne. Once the field is captured at the pupil plane, virtual phase screens can be inserted in the computer at various ranges, and a sharpness metric can be used to judge the influence of the phase screens. These phase screens can be changed until an image with the best sharpness is reached. One limit is the amount of beam spreading that can be tolerated for the illuminator beam. If flash illumination is used, then this is not a difficult limit. Assume even a 32 × 32 pixel FPA on receive. This means the illuminator beam can be 32 times larger in angle than a pixel, and the illuminator aperture can be 32 times smaller than the receive aperture. For a 20 cm receive aperture the transmit aperture can be less than 1 cm. Another way to think of this is that the Fried parameter can be less than 1 cm without interfering with the illuminator beam. Another limit is the turbulence on receive. This will limit angular resolution. The diffraction on receive will be limited to the diffraction associated with an aperture diameter equal to the Fried parameter unless some form of compensation is employed.
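The image-sharpness metric mentioned above can take many forms; a common choice (often attributed to Muller and Buffington) is the sum of squared pixel intensities, which, for fixed total energy, is largest when the image is most tightly focused. A toy numpy illustration, where the image size and the 4 × 4 blur standing in for an uncompensated phase screen are assumptions:

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Muller-Buffington-style sharpness metric: sum of squared
    intensities. For a fixed total energy, it is maximized when the
    energy is concentrated in the fewest pixels (best focus)."""
    return float(np.sum(image ** 2))

# A "focused" image: all energy in one pixel.
focused = np.zeros((32, 32))
focused[16, 16] = 1.0

# A "blurred" image: the same total energy spread over a 4x4 patch,
# standing in for the effect of an uncompensated phase screen.
blurred = np.zeros((32, 32))
blurred[14:18, 14:18] = 1.0 / 16.0

print(sharpness(focused), sharpness(blurred))  # 1.0 vs 0.0625
```

An optimizer can therefore adjust the virtual phase screens to maximize this scalar, with no knowledge of the true scene required beyond the assumption that sharper is better.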
If an adaptive optics device were invented that could not only impose an array of phase changes on an incoming beam but could also amplify or decrease the signal level in each pixel, then it would be possible to physically compensate for extended turbulence.

Conclusion 4-19: Any long-range active EO sensor will be limited by the atmosphere. The amount of limitation will vary from hour to hour and day to day. Atmospheric compensation can extend the operational range.

Conclusion 4-20: The ability to compensate for extended turbulence will greatly aid long-range active EO sensors.

PROCESSING, EXPLOITATION, AND DISSEMINATION

Processing, exploitation, and dissemination (PED) refers to the set of steps required to transform sensed photons into usable information. The processing step refers to the initial transformation of sensed photons into an image that, in turn, can be exploited in the context of an application such as robot navigation. The dissemination step refers to delivery of the information to its ultimate consumer(s)—for example, analysts seeking to understand an environment using change detection on a time series of images. There may be multiple types of processing involved, for example, image formation from ROICs (see Figure 4-61), followed by image compression to reduce the bandwidth required for real-time dissemination of imagery from an airborne sensor. Table 4-5 provides PED requirements for some selected ladar applications.

168 See http://en.wikipedia.org/wiki/Laser_guide_star, downloaded May 17, 2013.

TABLE 4-5 PED Requirements Versus Selected Application Domains

Surveillance. Possible platform: UAV. Processing: image formation and refinement, image/video selection and compression. Exploitation: precise positions for objects and actors in real time. Dissemination: high bandwidth desirable, but not easily achievable with RF.

Mapping. Possible platform: airborne imager. Processing: 3-D image formation, registration to knit multiple swaths together. Exploitation: precise 3-D maps of terrain, e.g., urban terrain. Dissemination: can be gathered and stored until return to base.

Local navigation. Possible platform: driverless automobile. Processing: image formation in real time. Exploitation: avoiding obstacles. Dissemination: not applicable; intended for local use.

FIGURE 4-61 Abstract model of steps from detector to analyst. SOURCE: National Research Council, 2010, Seeing Photons: Progress and Limits of Visible and Infrared Sensor Arrays, The National Academies Press, Washington, D.C., Figure 4-9.

Target Tracking

Target tracking is a general technique useful in a variety of application domains. An easy-to-understand example is the tracking of pedestrians and other vehicles in autonomous vehicle navigation. Because ladar is much more precise than radar for this application, it could generate reliable trajectories. Target tracking169 first involves detection and acquisition. Once a target is acquired, the illuminator must somehow be periodically directed toward the target as its direction and velocity are tracked, to guarantee that sufficient reflected information is available for real-time track maintenance.

169 C. Grönwall, G. Tolt, D. Karlsson, H. Esteki, J. Rydell, E.E. Armstrong, and J. Woods, 2010, "Threat detection and tracking using 3D FLASH ladar data," Proc. SPIE 7696: 76960N.

Various processing schemes can be employed, from tracking classified objects (object classification is discussed in the section below) to simply tracking the centroids of moving objects.170

Target Classification

Ladar target classification basically consists of image matches against a set or sets of known characteristics. The goal is often not an exact match, which would be difficult, but rather a best match or a match within a known threshold or error bound. A commercial use might be pedestrian detection for an autonomous vehicle. Ladar target classification is an active area of research, with a variety of recent advances reported in the technical literature—for example, using ensembles of classifiers171 or using machine learning techniques such as support vector machines (SVM).172

Developing 3-D Maps

Ladar has found a variety of mapping applications, some of which exploit the ability of the technology to produce 3-D imagery. Examples include archeology,173 forestry,174 and geology.175 Mapping is typically performed as an aerial surveying activity; data are collected and partially processed, then stored on the aircraft. The recorded survey data are then post-processed; e.g., swaths are combined into a map and then analyzed.

3-D Target Metrics

For monochromatic 2-D imagery the National Imagery Interpretability Rating Scale (NIIRS) rating is used extensively to characterize image quality. Leachtenauer et al.176,177 empirically determined a method to calculate the NIIRS rating using various image metrics, such as the ground sample distance (GSD) and the relative edge response (RER).

170 P. Morton, B. Douillard, and J. Underwood, 2011, "An evaluation of dynamic object tracking with 3D LIDAR," Australasian Conference on Robotics and Automation (ACRA), Melbourne, Australia: Australian Robotics and Automation Association (ARAA).

171 Z.-J. Liu, Q. Li, Z.-W. Xia, and Q.
Wang, 2012, "Target recognition for small samples of ladar range image using classifier ensembles," Opt. Eng. 51(8): 087201.

172 Z.-J. Liu, Q. Li, and Q. Wang, 2013, "Random subspace ensemble for target recognition of ladar range image," Opt. Eng. 52(2): 023203.

173 F. Diep, 2012, "How lasers helped discover lost Honduras city," TechNews Daily, June 7, http://www.technewsdaily.com/5837-lasers-helped-discover-lost-honduras-city.html, accessed May 30, 2013.

174 R.A. White and B.C. Dietterick, 2012, Use of LiDAR and Multispectral Imagery to Determine Conifer Mortality and Burn Severity Following the Lockheed Fire, U.S. Forest Service, General Technical Report PSW-GTR-238, pp. 667-675, http://www.fs.fed.us/psw/publications/documents/psw_gtr238/psw_gtr238_667.pdf.

175 R.A. Haugerud, D.J. Harding, S.Y. Johnson, J.L. Harless, C.S. Weaver, and B.L. Sherrod, 2003, "High-resolution lidar topography of the Puget Lowland, Washington," GSA Today, Geological Society of America, June.

176 J.C. Leachtenauer, W. Malila, J. Irvine, L. Colburn, and N. Salvaggio, 1997, "General image quality equation: GIQE," Applied Optics 36(32): 8322.

Leachtenauer et al. later used the same empirical methodology applied to infrared images.178 Thurman and Fienup provided physical explanations for many of the parameters in the General Image Quality Equation (GIQE).179 Equation 1 is a generic version of the GIQE. Various constants can be used in this equation, representing somewhat different forms of the GIQE.

NIIRS = c0 + c1 log2(GSD) + c2 log2(RER) + c3 J + c4 G/SNR,    (Equation 1)

where NIIRS is the image quality rating assigned in accordance with the National Image Interpretability Rating Scale, RER is the relative edge response, J is the mean height overshoot caused by edge sharpening, G is the noise gain resulting from edge sharpening, and SNR is the signal-to-noise ratio.

Recently Kamerman wrote a draft of a paper developing an information-theoretic approach to calculating the NIIRS rating for monochromatic 2-D images, as compared to an empirical method.180 Ideally, this information-based theory would be a better predictor of NIIRS ratings for monochromatic visible imagery and would also provide a path to generate useful metrics for other imaging modalities, such as IR imagery, radar, and 3-D ladar imagery. At this time, there are no good 3-D metrics allowing a comparison of various 3-D imaging sensors, much less allowing them to be compared against other sensing modalities.

Conclusion 4-21: It would be very useful to be able to compare the value of various 3-D images, especially if they could also be compared against other sensing modalities.

Recommendation 4-1: Three-dimensional metrics should be developed in such a way that the ability of a given sensor to perform a given function can be compared against the ability of another sensor to perform the same function. 3-D imagery can be developed in multiple ways: for example, using stereo from passive sensors vs. using a 3-D active EO sensor, or using 3-D ladar vs. interferometric synthetic aperture radar.
Metrics are needed to be able to compare the products of different sensing modalities.

FIGURE 4-62 Communication model for ladar used for ISR on a UAV.

178 J.C. Leachtenauer, W. Malila, J. Irvine, L. Colburn, and N. Salvaggio, 2000, "General image-quality equation for infrared imagery," Applied Optics 39(26): 4826-4828.

179 S.T. Thurman and J.R. Fienup, "Analysis of the general image quality equation," Proc. SPIE 6978: 69780F.

180 G.W. Kamerman, "On Image Information Density," Optical Engineering, to be published, 2013.
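For concreteness, the generic GIQE above can be written as a function. The coefficients below are placeholders chosen only to show the mechanics; published GIQE versions define specific constants, which are not reproduced here:

```python
import math

def giqe_niirs(gsd, rer, j, g, snr, c=(10.0, -1.0, 1.0, -0.5, -0.3)):
    """Generic GIQE form: NIIRS = c0 + c1*log2(GSD) + c2*log2(RER)
    + c3*J + c4*(G/SNR). The default coefficients are placeholders,
    not the constants of any published GIQE version."""
    c0, c1, c2, c3, c4 = c
    return c0 + c1 * math.log2(gsd) + c2 * math.log2(rer) + c3 * j + c4 * (g / snr)

# Halving the ground sample distance (finer detail) raises the rating
# by |c1| with everything else held fixed.
coarse = giqe_niirs(gsd=1.0, rer=0.9, j=1.1, g=2.0, snr=50.0)
fine = giqe_niirs(gsd=0.5, rer=0.9, j=1.1, g=2.0, snr=50.0)
print(fine - coarse)  # 1.0 with these placeholder coefficients
```

The structure makes the tradeoffs explicit: resolution (GSD) and edge response raise the rating logarithmically, while sharpening artifacts (J) and amplified noise (G/SNR) subtract from it.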

Data File Size, Compression, Dissemination, and Communication Bandwidth Requirements

A given sensor produces data at some rate, and then the application domain dictates whether the data are stored or communicated. Whether stored or communicated, the data may be compressed to a more compact form, ideally with minimal loss of information content. One reason for compression of stored data might be to make better use of available storage—for example, to store mapping data for a greater area of coverage or to extend mission duration. As Table 4-5 suggests, navigation of completely autonomous vehicles faces very little in the way of PED challenges, as the data are both produced and consumed locally by the navigation subsystem of the vehicle. On the other hand, as discussed in the earlier NRC report Seeing Photons,181 unmanned aerial vehicles (UAVs) used for intelligence, surveillance, and reconnaissance (ISR) may represent the most challenging ladar application for communications, as the data are meant to be consumed at a considerable distance from where they are gathered; an abstract view of this application is shown in Figure 4-62. Since the Wireless Link W is typically highly constrained relative to the sensor capabilities, compression is often employed between the sensor and the ground station. There are many forms such compression can take, including sampling, lossy compression, image compression, video compression, and compressive sensing. There is an interesting tradeoff between placing more computational intelligence in the sensor to provide sensor-side analysis and the SWaP considerations of the airborne sensor.

Fusion or Synergy with Other Sensing Modalities

Ladar is extremely powerful as a complement to other sensing capabilities.
For example, some combination of GPS, radar, video cameras, and ladar was used by almost all entrants in the DARPA Grand Challenges and the Urban Challenge to maximize the operating capabilities of the self-driving vehicles. The absolute frame of reference for navigating the vehicle could be determined to a reasonable degree of precision with differential GPS, but obstacle detection is best performed with one of the other sensor technologies.

Conclusion 4-22: Ladar may be most powerful when viewed as part of a system of complementary sensors rather than as a stand-alone multifunction sensor.

Conclusion 4-23: Use of ladars may be most cost-effective where they offer sensory capabilities (such as precision stand-off ranging and vibrometry) that are difficult for other sensors to achieve.

Computational Requirements, Processing Throughput, and Processor SWaP

Figure 4-63 shows the data processing steps associated with the ALIRT 3-D ladar system discussed in Chapter 2. One set of steps182 in image formation from Jigsaw183 data is illustrated in Figure 4-64. Each of these steps can consume substantial computing time (as shown in Table 4-6), which affects the total delay from detection to the image or other exploitable data product.

181 National Research Council, 2010, Seeing Photons: Progress and Limits of Visible and Infrared Sensor Arrays, The National Academies Press, Washington, D.C.

182 P. Cho, H. Anderson, R. Hatch, and P. Ramaswami, 2006, "Real-time 3D ladar imaging," Lincoln Laboratory Journal 16(1): 147.

183 R.M. Marino and W.R. Davis, 2006, "Jigsaw: a foliage-penetrating 3D imaging laser radar system," Lincoln Laboratory Journal 15(1): 23.

FIGURE 4-63 ALIRT data processing steps. SOURCE: Dale G. Fried, 2013, "Photon-counting laser radar development at MIT Lincoln Laboratory," April 24. Courtesy of MIT Lincoln Laboratory.

FIGURE 4-64 Steps in processing Jigsaw data. SOURCE: P. Cho, H. Anderson, R. Hatch, and P. Ramaswami, 2006, "Real-time 3D ladar imaging," Lincoln Laboratory Journal 16(1): 147. Reprinted with permission of MIT Lincoln Laboratory.

Since the goal of this system is real-time 3-D imagery, 8.2 s of compute time per second of the data stream is unacceptable. Therefore, additional computing resources are brought to bear in a mode of computing called parallel processing, where multiple computations are carried out simultaneously. Fortunately, the algorithms allow parallel operation, and a set of 10 computers, as illustrated in Figure 4-65, can be employed to reduce the computation time to enable real-time operation.

TABLE 4-6 Computing Resources Consumed per Second of Imagery

Algorithm task: Processing time/real time
Raw data input: 0.1
Cartesian integration: 3.0
Ground detection: 0.2
Response deconvolution: 3.2
Static voxel determination: 1.1
XYZ file output: 0.7
Total: 8.2

NOTE: Single-processor timing results measured on a 3 GHz Pentium machine. SOURCE: Peter Cho, Hyrum Anderson, Robert Hatch, and Prem Ramaswami, 2006, "Real-time 3D ladar imaging," Lincoln Laboratory Journal 16(1): 147. Reprinted with permission of MIT Lincoln Laboratory.

FIGURE 4-65 Parallelization of computation in Figure 4-64. SOURCE: Peter Cho et al., op. cit. Reprinted with permission of MIT Lincoln Laboratory.

FIGURE 4-66 Data reduction in a compute node from Figure 4-65. SOURCE: Peter Cho et al., op. cit. Reprinted with permission of MIT Lincoln Laboratory.

An additional benefit of some steps is that they reduce the data rate through the system to a rate that is more sustainable by serial processing techniques, in preparation for delivery of imagery, as shown in Figure 4-66.
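A back-of-the-envelope check on the parallelism requirement implied by Table 4-6: at 8.2 s of single-processor compute time per second of data, at least 9 perfectly parallel workers would be needed for real-time operation, consistent with the 10 machines actually used (which also leave margin for imperfect scaling):

```python
import math

# Table 4-6: total single-processor compute time per second of sensor data.
TOTAL_COMPUTE_S = 8.2   # seconds of processing per second of imagery
REAL_TIME_S = 1.0       # real-time budget per second of imagery

# Minimum worker count assuming perfect parallel scaling; real systems
# need extra capacity for communication and load-balancing overhead.
workers = math.ceil(TOTAL_COMPUTE_S / REAL_TIME_S)
print(workers)  # 9
```

In practice, speedup is sublinear in worker count (Amdahl's law), which is one reason the deployed configuration used 10 machines rather than the theoretical minimum.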

Currently, data from high-performance ladar sensors require parallel processing. Processing capabilities with abundant parallelism, such as many-core graphics processing units (GPUs), are commercially available, but SWaP requirements for UAVs are not a commercial consideration. DARPA's Power Efficiency Revolution for Embedded Computing Technologies (PERFECT) program184 has UAVs as a target application and seeks methods to move from the current energy performance level of 1 GFLOPS/W185 to greater than 75 GFLOPS/W. Interconnect architectures for some abundantly parallel machine architectures may be ill-suited to processing some sensor streams, but for others the performance gains may be substantial.

184 See http://www.darpa.mil/Our_Work/MTO/Programs/Power_Efficiency_Revolution_for_Embedded_Computing_Technologies_(PERFECT).aspx.

185 1 GFLOPS is 10^9 floating-point operations per second.