This chapter focuses on emerging active electro-optical (EO) technologies that are rapidly developing and whose implementation is still evolving. The focus is on several coherent systems such as temporal and spatial heterodyning, synthetic aperture ladar, multiple-input, multiple-output (MIMO) imaging, and speckle imaging. Also discussed are emerging approaches in femtosecond sources and quantum technologies. Although these technologies are not fully matured, they may have a significant impact on the field of active EO sensing in the next 5-10 years and beyond.
Color can be a very useful discriminant. People experience this when they compare black-and-white pictures to color pictures. Color distinctions are based on the difference in reflectivity versus wavelength. Active multispectral EO can complement conventional ladars when viewing solid targets by adding surface material discrimination information not available with just 2-D or 3-D imaging. Active multispectral imaging of targets can have an advantage over passive imaging because one can control the illumination source. Therefore, even at night, near-IR (NIR) wavelengths can be used, whereas there would not be enough signal to use passive multispectral sensing at these wavelengths. An active EO multispectral sensor combines the benefits of conventional ladar and multispectral sensing in a band that has significant color variation in its reflectance. Conventional ladars, and some passive imaging sensors, utilize the shape of a target for detection and/or identification. Two different approaches have been used to deploy active multispectral sensors against hard targets. One is to use laser wavelengths that are easy to generate, for example 1.064 µm and around 1.5 µm, and take whatever recognition benefit can be gained. The second approach is to determine which wavelengths offer the best active multispectral recognition probabilities and build lasers appropriate to these discriminants. In the second case, a class separability metric is formulated and optimal wavelengths are selected.1 Figure 3-1 shows reflectivity versus wavelength and angle for six different materials. Figure 3-2 shows reflectivity versus wavelength for leaves, showing significant spectral reflectivity changes in the NIR.
Multispectral and hyperspectral sensing depend on variation in reflectivity versus wavelength for the surface materials of the object being viewed. The main fundamental limit is that reflectance, or absorption, conveys only the surface properties of the solid object being viewed.
The key technologies for active multispectral imaging are the illuminator, a multispectral laser to illuminate an object at more than one wavelength, and the associated detector technologies.
To achieve active multispectral imaging, one needs laser sources that cover all of the wavelengths of interest. Therefore, developing active multispectral sensing requires that laser sources either have multiple laser lines or be very broadband.
1 M. Vaidyanathan, T.P. Grayson, R.C. Hardie, L.E. Myers, and P.F. McManamon, 1997, “Multispectral laser radar development and target characterization,” Proc. SPIE, 3065: 255.
FIGURE 3-1 Reflectivity versus wavelength and angle for six different materials. SOURCE: D.G. Jones, D.H. Goldstein, and J.C. Spaulding, 2006, “Reflective and polarimetric characteristics of urban materials,” AFRL Tech Report, AFRL-MN-EG-TP-2006-7413.
FIGURE 3-2 Leaves from trees have different spectral qualities. Plot shows leaf reflectivity and transmission. SOURCE: Reprinted from Remote Sensing of Environment, 64/3, G.P. Asner, Biophysical and Biochemical Sources of Variability in Canopy Reflectance, 234-253, 1998, with permission from Elsevier.
Because the system is active, NIR spectral bands can be detected even at night, and NIR bands exhibit a significant amount of reflectivity variation, or color. This sensor type will be especially useful in high-clutter situations or situations where the target is partially obscured.
Active multispectral sensing does not require precision angle/angle resolution because the surface material discrimination is based on the ratio of reflectance at various wavelengths, not on the shape of an object. Therefore, to the first order, active multispectral sensing is not dependent on the diffraction limit, so it can be a useful long-range discriminant. It is also not dependent on exact object shape, which explains its usefulness when a target is partially obscured. The only angular size effect is based on color mixing over a pixel when pixels become larger.
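As a sketch of this idea, the fragment below classifies a surface by comparing the ratio of its reflectances at two laser wavelengths against a small library, a quantity that is independent of overall signal level and object shape. It is illustrative only; the material names and reflectance values are invented placeholders, not measured data.

```python
# Illustrative sketch: discriminating surface materials by the ratio of
# reflectance at two laser wavelengths. Ratios cancel overall illumination
# level, so the result does not depend on range or object shape.

def band_ratio(reflectances):
    """Return the ratio of band-1 to band-2 reflectance."""
    r1, r2 = reflectances
    return r1 / r2

def classify(measured, library):
    """Pick the library material whose band ratio is closest to the measurement."""
    m = band_ratio(measured)
    return min(library, key=lambda name: abs(band_ratio(library[name]) - m))

# Hypothetical reflectances at (1.064 um, 1.5 um); placeholder values only.
library = {
    "foliage": (0.55, 0.30),        # strong NIR reflectance
    "painted_metal": (0.25, 0.22),  # flatter spectral response
}

print(classify((0.50, 0.28), library))  # ratio ~1.79, closest to foliage
```

Because only the ratio matters, a noisy absolute calibration of the receiver affects both bands equally and largely cancels.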
One disadvantage is that active multispectral requires laser illumination at all wavelengths being used. Three bands would require three lasers, or the ability to divide a single laser into sources at each of the three wavelengths, or else sequencing through the bands. An optical parametric oscillator (OPO) can be used to shift the wavelengths transmitted over time, as long as the object being viewed is stationary over that time period. Alternatively, a broadband laser source containing many wavelengths can be used. A broadband laser has the disadvantage of spreading the laser light over that broadband, reducing the available light at any particular wavelength.
Active multispectral EO sensing, whether using multiple lasers or a single laser shifted to multiple wavelengths, has a low scientific barrier to entry. Many countries have excellent laser technology, which is the first requirement to manufacture multiline or broadband lasers. As a result, the comparative state of the art in this area is not very meaningful. Once it is decided to provide laser sources at multiple wavelengths, it is relatively straightforward to make an active multispectral sensor. That said, development of this technology also requires investment in the associated receivers and their integration into a single unit that meets constraints of size, weight, and power (SWaP) and cost.
As described in Chapter 2, laser radar systems using direct detection (as in 3-D flash imaging) can be limited by the noise in the detector. One method for reducing the effect of detector noise was taken from the radio frequency (RF) community. Heterodyne detection was originally developed in the RF domain as a way to convert a received signal to a fixed intermediate frequency that can be processed more conveniently, and with less added noise, than the original carrier frequency. This practice has been similarly adapted to optics as a way to detect weak signals.
In optical heterodyne detection, a weak input laser signal is mixed with a local oscillator (LO) laser wave by simultaneously illuminating a detector with both signals. For temporal heterodyne it is very important to match the illumination angles of the LO light and the return signal light across the detector, or else spatial fringes develop. High-spatial-frequency fringes smaller than a detector can average the interfering signal to zero across the detector. Figure 3-3 illustrates the arrangement of a simple optical heterodyne receiver. A laser transmits a coherent waveform of light toward a target. The reflected light beam is mixed with the reference laser (local oscillator) beam at a beam splitter in the receive optical path, and the beams are superimposed on the detector. The resulting photocurrent is proportional to the total optical intensity, which is the square of the total electric field amplitude. If the LO power is raised until LO shot noise dominates all other noise sources, the signal-to-noise ratio becomes limited only by shot noise associated with the return signal. For temporal heterodyne detection, the reference laser frequency is offset from the source laser frequency by ωif, and the resulting optical intensity has fluctuations at the difference and sum frequencies of the two fields and at double the frequency of each of the fields. The LO frequency is offset so it is possible to determine whether any velocity is toward or away from the sensor. The coherent
FIGURE 3-3 Simple heterodyne (coherent) laser radar configuration.
receiver is usually designed to isolate the difference frequency component from fluctuations and noise at other frequencies.2
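The mixing process described above can be sketched numerically. The following fragment is illustrative only: it uses scaled-down stand-in frequencies rather than true optical ones, squares the sum of a weak signal field and a strong LO field (as a square-law detector does), and recovers the beat at the difference frequency after the sum and double-frequency terms are filtered away.

```python
import numpy as np

# Illustrative simulation of optical heterodyne mixing on a square-law
# detector. Frequencies are stand-ins; only the term structure matters.

fs, n = 1.0e6, 5000                  # sample rate (Hz) and record length
t = np.arange(n) / fs
f_sig, f_lo = 200.0e3, 180.0e3       # stand-in "signal" and "LO" frequencies
e_sig = 0.01 * np.cos(2 * np.pi * f_sig * t)   # weak return field
e_lo = 1.0 * np.cos(2 * np.pi * f_lo * t)      # strong LO field

# photocurrent ~ total intensity = (E_sig + E_lo)^2, which contains DC,
# difference-frequency, sum-frequency, and double-frequency terms
i_det = (e_sig + e_lo) ** 2
i_det -= i_det.mean()                # remove the DC term

spec = np.abs(np.fft.rfft(i_det))
freqs = np.fft.rfftfreq(n, 1 / fs)
low = freqs < 100.0e3                # the band a real receiver would keep
beat_hz = freqs[low][np.argmax(spec[low])]
print(beat_hz)                       # 20000.0: the 20 kHz difference frequency
```

Note that the recovered beat amplitude is proportional to the product of the two field amplitudes, which is why a strong LO effectively amplifies a weak signal.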
For traditional heterodyne detection, the LO field strength must be much higher than the return signal strength in order to overwhelm the various noise sources. However, if the detector is sufficiently sensitive, there is little need to mitigate detector noise by using a strong LO. If the field strengths are similar, the noise mitigation benefit of a strong LO is lost, but the frequency comparison and narrowband filtering features are retained.
Traditional heterodyne detection with a strong LO has an added challenge if arrays of detectors are desired. While a high-resistance receiver, such as a focal plane array with relatively low bandwidth, can operate with low LO power, other standard high-bandwidth IR detectors used in heterodyne detection systems, such as a linear GHz-bandwidth photodiode, can require as much as 1 mW of LO power or more to reach the shot noise limit. This may result in unacceptable heat loads of greater than 10 W for large arrays of 10,000 elements.3 A 256 × 256 array would require even larger LO powers. Using a photon counting detector allows use of a low-power LO and potentially the ability to use the same detector for both coherent and direct detection. In order to use photon counting detectors such as Geiger-mode (GM) avalanche photodiodes (APDs), the LO strength must be matched to the signal strength. In this case, the noise mitigation effects from the strong LO are lost, but the ability to detect frequency shifts is maintained.
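The heat-load arithmetic quoted above is easy to reproduce. The sketch below simply scales an assumed ~1 mW-per-element LO requirement (the figure cited in the text for GHz-bandwidth photodiodes) to large arrays; the per-element figure is a design assumption, not a universal constant.

```python
# Back-of-envelope check of the LO power burden for strong-LO heterodyne
# arrays: ~1 mW of LO per high-bandwidth element to reach the shot-noise
# limit, scaled up to large detector counts.

def lo_heat_load_w(elements, lo_per_element_mw=1.0):
    """Total optical LO power (W) delivered to the focal plane."""
    return elements * lo_per_element_mw * 1e-3

print(lo_heat_load_w(10_000))      # 10.0 W for a 10,000-element array
print(lo_heat_load_w(256 * 256))   # 65.536 W for a 256 x 256 array
```

This is why photon-counting receivers with a weak, signal-matched LO are attractive for large formats: the LO heat load scales with array size in the strong-LO case.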
The block diagram for weak LO heterodyne detection is the same as that shown in Figure 3-3, with the detector being replaced by a GM-APD array. A laser transmits a coherent waveform toward a target, the reflected beam is mixed with the LO at a beam splitter in the receive optical path, and the beat signal is detected by the receiver. The object is imaged onto the detector array, but in this case the readout is simply the photon arrival time. The size of the angle/angle resolution element depends on the detector angular subtense and the size of the focused optical spot. To detect more than one photon per angle/angle resolution element per reset of the detector, the receive optics can be constructed so that each focused optical spot is spread across a group of pixels (called a macropixel).4 Each macropixel then acts as a photon-number-resolving detector whose dynamic range is equal to the number of pixels contained in the macropixel. For example, a 32 × 32 GM-APD can be broken up into an 8 × 8 array of photon-
2 P. McManamon, 2012, “Review of ladar: A historic, yet emerging, sensor technology with rich phenomenology,” Optical Engineering 51(6): 060901.
3 L. Jiang and J. Luu, “Heterodyne detection with a weak local oscillator,” Applied Optics 47(10), 1486-1503, (2008).
4 L. Jiang et al., 2007, “Photon-number resolving detector with 10-bits resolution,” Phys. Rev. A, 75(6): 062325.
FIGURE 3-4 Power spectral density (PSD) of the detected current for a temporal heterodyne laser radar receiver, shown for (a) 1 pulse and (b) 100 pulses of incoherent averaging. SOURCE: L. Jiang, E. Dauler, and J. Chang, 2007, “Photon-number-resolving detector with 10 bits of resolution,” Physical Review A 75 (6):062325.
number-resolving detector macropixels. Each macropixel can be a 4 × 4 array of pixels, or 16 pixels, resulting in a 4-bit dynamic range. When plotted as a function of time for a single macropixel, the photon arrival times map out the frequency of the beat signal. The detector readout rate must be high enough to sample the beat frequency between the LO and the return signal, or something must be done to mitigate aliasing. For a single-pixel detector this means the beat frequency cannot be sampled if it is higher than half the array readout rate, but the macropixel approach allows detector-bandwidth-limited sampling of the time of arrival of photons. With large enough macropixels, beat frequencies can be sampled up to the detector bandwidth rather than being limited by the detector array frame rate.
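The macropixel grouping can be sketched in a few lines. The fragment below is illustrative (the 30 percent firing probability per reset is an arbitrary stand-in): it bins a 32 × 32 Geiger-mode frame into an 8 × 8 array of 4 × 4 macropixels, each acting as a 0-16 count photon-number-resolving element.

```python
import numpy as np

# Illustrative sketch of the macropixel idea: a 32 x 32 Geiger-mode array
# grouped into 8 x 8 macropixels of 4 x 4 = 16 pixels each, so each
# macropixel resolves 0-16 photons (~4 bits of dynamic range) per reset.

rng = np.random.default_rng(0)
fired = rng.random((32, 32)) < 0.3        # which pixels avalanched this reset

# sum each 4 x 4 block to obtain per-macropixel photon counts
counts = fired.reshape(8, 4, 8, 4).sum(axis=(1, 3))

print(counts.shape)        # (8, 8)
print(counts.max() <= 16)  # True: a macropixel can never exceed 16 counts
```

Repeating this per reset and tracking the counts over time is what maps out the beat frequency for each macropixel.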
In experiments, the return signal for each pulse is coherently integrated (that is, a fast Fourier transform of the photon arrival times for each pulse is performed) and then incoherently averaged over multiple pulses (that is, the power spectral densities (PSDs) are summed over all collected pulses). If the LO and transmit signals are offset in frequency by foffset, the beat signal is located at foffset. If the target has a velocity component along the ladar’s line of sight, the PSD instead shows a peak at the sum of the offset frequency and the target’s Doppler frequency. The PSD of several return pulses can be averaged to smooth out the curve, as shown in Figure 3-4.
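The processing chain just described, coherent integration per pulse followed by incoherent averaging of PSDs, can be sketched as follows. The offset and Doppler frequencies and the noise level are arbitrary stand-ins chosen for illustration.

```python
import numpy as np

# Illustrative sketch: per pulse, coherently integrate (FFT) the detected
# beat signal, then incoherently average (sum the PSDs) over many pulses.
# The peak appears at f_offset + f_doppler.

fs, n = 1.0e6, 1000
t = np.arange(n) / fs
f_offset, f_doppler = 100.0e3, 25.0e3    # LO offset plus target Doppler shift
rng = np.random.default_rng(1)

psd_sum = np.zeros(n // 2 + 1)
for _ in range(100):                      # 100 pulses
    beat = np.cos(2 * np.pi * (f_offset + f_doppler) * t)
    noisy = beat + 3.0 * rng.standard_normal(n)       # noisy single pulse
    psd_sum += np.abs(np.fft.rfft(noisy)) ** 2        # incoherent averaging

freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(psd_sum[1:]) + 1]  # skip the DC bin
print(peak)                               # 125000.0 Hz
```

A single noisy pulse would not show a clean peak here; the incoherent sum over 100 pulses is what makes the line at 125 kHz unambiguous, mirroring Figure 3-4.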
There are important limitations to this technique that limit its applicability and the measurement concept of operations (CONOPS). Principal among these are the dynamic range and visibility constraints and requirements on LO power control, as well as the timescale constraints that link photocount rate, beat frequency, and vibrational frequency. These limitations were discussed in more detail in the section on laser vibrometry in Chapter 2.
As indicated in the preceding section, it is critical to maintain coherence between the wavefronts of the signal and the LO. While it may be possible to minimize the effects on the system side, the transmission medium may ultimately determine the performance of heterodyne detection. In addition to large fluctuations of attenuation produced by fog, rain, and smoke, inhomogeneities in the atmosphere itself may produce wavefront distortion. As a result of this wavefront distortion caused by air turbulence, a large portion of the light power can be converted into higher-order modes, making it difficult to match the LO and return signal patterns. Therefore, heterodyne detection over large distances through the atmosphere is inherently difficult.
Unlike in direct detection receivers, the dominant noise source in heterodyne or coherent receivers is the shot noise generated by the local oscillator beam. For a matched filter receiver, that effective noise is equal to one detected photon per resolution element (in both time and space).5 To contend efficiently with this inherent noise, a coherent detection system is most efficient when on the order of one (or a few) signal photons are detected per angle/angle/range resolution cell per pulse. Below one detected signal photon per resolution element per pulse, the required transmitter power scales as the square root of the pulse repetition frequency (PRF). Therefore, if a 10 W, 100 Hz transmitter is the optimal coherent design (giving ~1 photon per resolution element), then 100 W would be required for a higher-PRF, 10 kHz system. This can make higher pulse energies and lower PRFs the most energy-efficient solution for many measurement problems, which may be less attractive than other designs, whether for technological reasons or where low SWaP and/or high reliability are required. Direct detection receivers do not have this fundamental noise constraint and can have much less than one detected noise photon per resolution element per pulse.6
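Under this square-root scaling, the required transmitter power follows directly from a reference design point. The sketch below uses the 10 W, 100 Hz design point from the text; it is a scaling illustration, not a full link-budget calculation.

```python
# Sketch of the scaling argument above: below one detected signal photon
# per resolution element per pulse, required average transmitter power
# grows as the square root of the PRF relative to a reference design.

def required_power_w(prf_hz, ref_power_w=10.0, ref_prf_hz=100.0):
    """Average power needed at prf_hz, scaled from the reference design."""
    return ref_power_w * (prf_hz / ref_prf_hz) ** 0.5

print(required_power_w(100.0))     # 10.0 W at the 100 Hz design point
print(required_power_w(1_000.0))   # ~31.6 W at 1 kHz
print(required_power_w(10_000.0))  # 100.0 W at 10 kHz
```

The square-root penalty is why lower-PRF, higher-pulse-energy designs often win on energy efficiency for coherent systems.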
Another fundamental limit of heterodyne detection is the effect of speckle, which is present in highly coherent light. As discussed, the LO and signal must be temporally coherent. They also need to be spatially coherent across the face of the detector and must avoid an additional linear phase difference; otherwise they will produce spatial fringes across the detector, destroying the signal. In many usage scenarios the signal is reflected from optically rough surfaces, producing randomized phases and a “salt and pepper” intensity modulation on the image known as speckle,7 which also occurs in microwave synthetic aperture radar whenever the illumination source is sufficiently temporally coherent, as it is for a laser. Techniques such as averaging independent speckle realizations can be used to reduce the speckle.8
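The benefit of averaging independent speckle realizations can be verified numerically. The sketch below assumes fully developed speckle, whose intensity statistics are exponential, and shows the speckle contrast (standard deviation over mean) dropping from about 1 toward 1/√N after averaging N independent looks.

```python
import numpy as np

# Illustrative sketch: fully developed speckle has unit contrast
# (std/mean = 1); averaging N independent realizations reduces the
# contrast roughly as 1/sqrt(N).

rng = np.random.default_rng(2)
n_pix, n_looks = 100_000, 25

# intensity of fully developed speckle is exponentially distributed
single = rng.exponential(1.0, size=n_pix)
averaged = rng.exponential(1.0, size=(n_looks, n_pix)).mean(axis=0)

def contrast(i):
    """Speckle contrast: standard deviation divided by mean intensity."""
    return i.std() / i.mean()

print(contrast(single))    # ~1.0 for a single look
print(contrast(averaged))  # ~0.2 = 1/sqrt(25) after 25 looks
```

The independent looks can come from wavelength, polarization, aperture, or temporal diversity; the 1/√N contrast reduction is the same in each case.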
Coherent detection techniques require narrow-linewidth lasers providing coherent illumination. The coherence length must be longer than the two-way round-trip path of the pulse, or else a sample of the outgoing signal must be stored until the signal returns and used to develop the LO. In the second case the coherence length of the illuminating laser must still be longer than twice the depth of the target. A method of shifting the LO frequency is required to determine the direction of the velocity; this is usually accomplished with an acousto-optic modulator. Larger, high-bandwidth arrays are also important. Finally, adaptive optics may help reduce the effects of turbulence.
Published literature on the development of the laser systems described above would be one indicator. Indicators of progress in heterodyne detection systems may also be found in literature and/or research on some of the applications of coherent ladar (vibrometry, spectroscopy, synthetic aperture ladar, etc.). Additional indicators would be work on high-bandwidth arrays. For heterodyne approaches that do not use sensitive detectors, work on alternating-current (AC)-coupled arrays would also be an indicator.
There are a number of advantages to using heterodyne detection rather than direct detection. First, as mentioned above, a weak signal can be amplified by a strong LO and the signal-to-noise ratio with heterodyne detection depends only on the signal strength, the detector quantum efficiency, and the signal bandwidth. These results are true of both temporal and spatial heterodyne detection. This means that high gain detectors like those used in the direct detect 3-D ladars are not necessarily needed, and shot-noise-limited detection is possible with low-tech photodiodes.
Second, the heterodyne receiver provides high discrimination against background light and other radiation. Unlike direct detection, where background light causes problems when it is of the same order of magnitude as the signal power, in heterodyne detection the background light must be comparable to the LO power, which in many cases is made quite high and can be set to dominate the background light.
5 P. Gatt and S. Henderson, 2001, “ Laser radar detection statistics: A comparison of coherent and direct detection receivers,” Proc. SPIE, 4377: 251.
6 Personal communication from Sammy Henderson, President, Beyond Photonics, April 20, 2013.
7 C. Dainty, ed., 1984, Laser Speckle and Related Phenomena, Springer Verlag.
8 J.W. Goodman, 2007, Speckle Phenomena in Optics: Theory and Applications, Roberts & Co.
Furthermore, the coherent detection bandwidth can be controlled by a postdetection electronic filter that can be as narrow as desired.9 Heterodyne detection usually has much narrower receiver bandwidths than direct detection.
A third feature of coherent detection, exploited in temporal heterodyne detection, is that the amplified output occurs at the difference frequency between the LO and signal beams. This sensitivity to frequency difference makes it possible to measure the phase or frequency shift of the signals and hence obtain Doppler measurements for moving targets. This type of measurement is not directly possible with direct detection systems, which only measure intensity; it takes multiple range measurements to indirectly measure velocity using direct detection.
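The Doppler relationship that temporal heterodyne exploits is f_D = 2v/λ for radial velocity v, and the sign of the shift, recoverable because of the offset LO, gives the direction of motion. A minimal sketch follows; the 1.55 µm wavelength is an illustrative choice, not a value from the text.

```python
# Sketch of the Doppler relationship used by temporal heterodyne ladar:
# a radial velocity v shifts the return by f_D = 2 * v / wavelength, and
# the sign of the shift distinguishes closing from opening targets.

def doppler_shift_hz(v_mps, wavelength_m=1.55e-6):
    """Doppler shift for radial velocity v (positive = closing target)."""
    return 2.0 * v_mps / wavelength_m

print(doppler_shift_hz(1.0))    # ~1.29 MHz of shift per m/s at 1.55 um
print(doppler_shift_hz(-1.0))   # negative shift for an opening target
```

The MHz-scale shift per m/s at optical wavelengths is what makes coherent ladar so sensitive to slow motion compared with RF radar.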
While heterodyne detection offers the potential for highly sensitive measurements, there are a number of practical limitations to the scheme. Coherent ladars are essentially interferometers. If the phases of the two beams are not well matched, the fringes will oscillate back and forth and the signal will be washed out; this lowers efficiency in temporal heterodyne. For temporal heterodyne detection, the two fields from the transmitter and the LO must be spatially locked in phase at the detector. The two beams must be coincident and, to provide maximum signal-to-noise ratio, their diameters must be equal. The beams must propagate in the same direction and the wavefronts must have the same curvature. For spatial heterodyne, by contrast, the LO and return signal deliberately propagate in slightly different directions so that spatial fringes develop. Finally, for temporal heterodyne, the beams must be identically polarized, so that their electric vectors will be coincident.10 These requirements are called “coherent superposition,” and failure to meet them can cause a loss of signal reception.
A good way to deal with some of these constraints is to use a single laser for both the LO and the signal and, for temporal heterodyne, to use an acousto-optic modulator to create the frequency difference between the two. In this configuration, the relative phase of the two beams is fairly stable even if the source does not exhibit low phase noise. In addition to the superposition requirements, the laser must be coherent, and the coherence length must either be longer than the round-trip distance or at least longer than twice the depth of the target, provided master oscillator drift is compensated by some technique, such as delaying a sample of the master oscillator for use as the local oscillator.
Heterodyne (or coherent) detection can be used in a number of applications such as coherent Doppler ladar measurements, vibrometry, spectroscopy, and very high resolution imaging techniques such as synthetic aperture ladar and inverse synthetic aperture ladar. Several of these applications are discussed in more detail in later sections.
Conclusion 3-1: Advantages of shot-noise-limited detection, high background discrimination, and measurement of phase or frequency shifts in addition to intensity make heterodyne detection a compelling and promising technology.
Conclusion 3-2: Heterodyne detection can be used with a weak local oscillator if detectors are already sensitive enough so that a strong local oscillator is not required as a method of increasing the receiver sensitivity.
According to Voxtel, “Conventional optical imagers, including imaging ladars, are limited in angle/angle spatial resolution by the diffraction limit of the telescope aperture. As the aperture size increases, the angle/angle resolution improves; as the range increases, spatial resolution degrades. Thus, high-resolution, real-beam imaging at long ranges requires large telescope
9 S. Jacobs, 1988, “Optical heterodyne (coherent) detection,” Am. J. Phys. 56(3): 235.
10 O.E. DeLange, 1968, “Optical heterodyne detection,” IEEE Spectrum 5(10): 77.
diameters. Imaging resolution is further dependent on wavelength, with longer wavelengths producing coarser angle/angle resolution. Thus, the limitations of diffraction are most apparent in the radio-frequency domain (as opposed to the optical domain).”11
Buell et al. describe how this limitation was overcome: “A technique known as synthetic-aperture radar (SAR) was invented in the 1950s to overcome this limitation: In simple terms, a large radar aperture is synthesized by processing the pulses emitted at different locations from a radar aperture as it moves, typically on an airplane or a satellite. The resulting image resolution is characteristic of significantly larger apertures. For example, the Canadian RadarSat-II, which flies at an altitude of about 800 km, has an antenna size of 15 × 1.5 meters and operates at a wavelength of 5.6 cm. Its real-aperture resolution is on the order of 1 kilometer, while its synthetic-aperture resolution (with a transmission bandwidth of 100 MHz) is as fine as 3 m. This resolution enhancement is made possible by keeping track of the phase history of the radar signal as it travels to the target and returns from various scattering centers in the scene. The final synthetic-aperture radar image is reconstructed from many pulses transmitted and received during a synthetic-aperture evolution time using sophisticated signal processing techniques.”12
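The quoted RadarSat figures can be roughly cross-checked with the standard formulas: real-aperture cross-range resolution ≈ λR/D and slant-range resolution c/(2B). The sketch below plugs in the parameters quoted above; the results (km-scale real-aperture resolution, meter-scale range resolution) are order-of-magnitude consistent with the text.

```python
# Rough cross-check of the RadarSat numbers quoted above using the
# standard formulas: cross-range ~ lambda * R / D, slant range ~ c / (2B).

C = 3.0e8                       # speed of light (m/s)

def real_aperture_res_m(wavelength_m, range_m, aperture_m):
    """Diffraction-limited cross-range resolution of a real aperture."""
    return wavelength_m * range_m / aperture_m

def range_res_m(bandwidth_hz):
    """Slant-range resolution set by the transmitted bandwidth."""
    return C / (2.0 * bandwidth_hz)

# lambda = 5.6 cm, R = 800 km, 15 m antenna, B = 100 MHz (as quoted)
print(real_aperture_res_m(0.056, 800.0e3, 15.0))  # ~3 km: km-scale, as quoted
print(range_res_m(100.0e6))                        # 1.5 m slant-range
```

The quoted 3 m synthetic-aperture figure is a ground-plane value after processing; the 1.5 m result here is the underlying slant-range limit set by bandwidth alone.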
An alternative description of what happens to create a high-resolution synthetic-aperture ladar (SAL) image is that at a given instant a laser waveform is transmitted from either a monostatic or a bistatic aperture. The laser light reflects off the target, and the return field is captured using either spatial or temporal heterodyne. To date, almost all SAL work has been done using temporal heterodyne, but the key issue is capturing a sample of the pupil plane field as large as the real receive aperture. Shortly afterward, at another location, the same measurement is made, and then again and again as the transmit and receive apertures move. If platform motion can be compensated, a physically large representation of the pupil plane field is captured, which can then be Fourier transformed to form a high-resolution image. Since for monostatic operation both the transmitter and the receiver move, the synthesized pupil plane aperture is almost twice the distance flown.
In recent years, researchers have investigated ways to apply the techniques and processing tools of RF SARs to optical laser radars. According to Buell et al. “There are several motivations for developing such an approach in the optical or visible domain. The first is simply that humans are used to seeing the world at optical wavelengths. Optical SAL would potentially be easier than microwave radar for humans to interpret, even without specialized training. Second, optical wavelengths are around 20,000 times shorter than RF wavelengths, and can therefore provide much finer spatial resolution and/or much faster imaging times.”13 A typical synthetic aperture motion distance will be many kilometers for SAR but only meters for SAL, assuming the same target resolution requirement. Over time, new applications may arise for which additional resolution requirements impose longer motion distances on SAL.
The SAL concept is illustrated in Figure 3-5. This paragraph is drawn from Beck et al.: “A platform with a transmitter-receiver module moves with velocity ν while it illuminates a target with light of mean wavelength λ and receives the scattered light. The imaging resolution in the direction of travel of the platform is given approximately by δx = λ/(2Δθ), where the change in azimuth angle Δθ as seen by an observer at the target at range R is Δθ = DSA/R, and the synthetic-aperture length developed during flight time T is DSA = ν × T.”14 The range resolution, in the orthogonal direction, is determined by the bandwidth B of the transmitted waveform, δy = c/(2B), so long as the receiver can measure the returned bandwidth. Coherent (heterodyne) detection is used to measure the phase history of the returned ladar signals throughout the synthetic-aperture formation time. According to Beck et al., “Two of the main types of synthetic aperture active sensors are (1) spotlight mode (Figure 3-5), in which
11 Voxtel, http://www.virtualacquisitionshowcase.com/document/602/brochure. Accessed on March 14, 2014.
12 W.F. Buell, N.J. Marechal, J.R. Buck, R.P. Dickinson, D. Kozlowski, T.J. Wright, and S.M. Beck, 2004 “Synthetic Aperture Imaging Ladar” Crosslink (Summer): 45-49.
14 S. Beck, J. Buck, W. Buell, R. Dickinson, D. Kozlowski, N. Marechal, and T. Wright, 2005 “Synthetic-aperture imaging laser radar: Laboratory demonstration and signal processing,” Appl. Opt. 44(35): 7621-7629.
FIGURE 3-5 Spotlight synthetic-aperture ladar (SAL). The illuminating spot size Dspot at the target is determined by the diffraction limit of the transceiver optic with diameter Dt, corresponding to the imaging resolution of a conventional imager with the same aperture. The resolution in the direction of travel (azimuthal, δx) is determined by the wavelength and the evolved aperture length. (For strip-map SAL, this length is limited by the illuminating spot size at the target.) The resolution in the orthogonal direction (range, δy) is determined by the transmitted waveform bandwidth, B. The angle Δθ is the angle subtended by the synthetic aperture as viewed from an image element at the target. To obtain the resolution in the ground plane, a simple rotation from the slant plane to the ground plane is performed. SOURCE: S.M. Beck, J.R. Buck, W.F. Buell, R.P. Dickinson, D.A. Kozlowski, N.J. Marechal, and T.J. Wright, 2005, “Synthetic aperture imaging laser radar: laboratory demonstration and signal processing,” Appl. Opt. 44(35): 7621.
the transmitted beam is held at one position on the target for the coherent dwell period and then moved to another spot, and (2) strip mode, in which the transmitted beam is continuously scanned across a target. Most of this discussion applies to either case. In strip mode, the aperture synthesis time is limited by the beamwidth of the sensor and the velocity of the platform (the time during which the target is illuminated). Smaller real apertures result in larger illuminating spots and concomitantly longer synthetic apertures, which leads to the nonintuitive (from a conventional imaging perspective) result that the azimuthal resolution in strip-mode SAL is half of the real-aperture diameter”:15 the smaller the transmitter aperture, the better the resolution. Bistatic configurations, where the transmit and receive apertures are not collocated, can also be considered.
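The spotlight-SAL resolution relations above can be put into a short sketch. The platform and waveform parameters below are illustrative choices, not values from any fielded system.

```python
# Sketch of the spotlight-SAL resolution formulas quoted above:
# delta_x = lambda / (2 * delta_theta), with delta_theta = D_SA / R and
# D_SA = v * T; range resolution delta_y = c / (2 * B).

C = 3.0e8                                  # speed of light (m/s)

def sal_azimuth_res_m(wavelength_m, range_m, v_mps, dwell_s):
    """Azimuth resolution from the synthetic aperture swept out in dwell_s."""
    d_sa = v_mps * dwell_s                 # synthetic-aperture length
    delta_theta = d_sa / range_m           # angle subtended at the target
    return wavelength_m / (2.0 * delta_theta)

def sal_range_res_m(bandwidth_hz):
    """Range resolution from the transmitted waveform bandwidth."""
    return C / (2.0 * bandwidth_hz)

# 1.5 um laser, 10 km range, 100 m/s platform, 0.1 s dwell -> 10 m aperture
print(sal_azimuth_res_m(1.5e-6, 10.0e3, 100.0, 0.1))  # 0.00075 m azimuth
print(sal_range_res_m(7.0e9))  # ~0.021 m for a 7 GHz waveform
```

Note how a mere 10 m synthetic aperture yields sub-millimeter azimuth resolution at 10 km, a direct consequence of the short optical wavelength, while a SAR at the same resolution would need a kilometers-long aperture.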
Figure 3-6 shows an early embodiment of a laboratory SAL demonstration system developed at The Aerospace Corporation based on wide-bandwidth frequency-modulated continuous-wave (FMCW) waveforms. According to Beck et al., “The components employed are all common, off-the-shelf, telecommunication fiber-based devices allowing a compact system to be assembled that can easily be isolated from environmental effects: The source is split into five paths, for target illumination, target-LO, reference, reference-LO, and wavelength reference. A circulator is used to recover the return pulse, which
FIGURE 3-6 Component layout for the fiber-based SAIL system. The components employed are all common, off-the-shelf, telecom, fiber-based devices, allowing a very compact system to be assembled that can be easily isolated from environmental effects. The source is split into five paths for the target illumination, target-local oscillator (LO), reference (REF), reference-local oscillator, and wavelength reference. A circulator is used to recover the return pulse, which is mixed with the target-local oscillator in a balanced heterodyne detector. The reference channel is delayed by a fiber loop and then mixed with the reference-local oscillator in a similar manner. The synthetic aperture is created by using a translation stage to scan the aperture across the target. A molecular wavelength reference cell (hydrogen cyanide -HCN) provides a pulse to pulse frequency absolute reference. SOURCE: from Walter Buell (variation of Figure 3 in S.M. Beck, J.R. Buck, W.F. Buell, R.P. Dickinson, D.A. Kozlowski, N.J. Marechal, and T.J. Wright, 2005, “Synthetic aperture imaging ladar: laboratory demonstration and signal processing,” Appl. Opt. 44(35): 7621).
is mixed with the target-LO in a balanced heterodyne detector. The reference channel is delayed by a fiber loop and then mixed with the reference-LO in a similar manner. The synthetic aperture is created by use of a translation stage to scan the aperture across the target.”16 Figure 3-7 shows a typical laboratory image from the system of Figure 3-6 (see caption for details).
In 2003, DARPA initiated the Synthetic Aperture Ladar Tactical Imaging (SALTI) program with the aim of achieving high-resolution synthetic aperture ladar imagery from an airborne platform at tactical ranges, moving SAL from the laboratory to operationally relevant environments. The performance characteristics of the SALTI program are classified, but the system did achieve synthetic aperture resolution exceeding the real-aperture diffraction-limited resolution of the system. The program progressed through DARPA Phase 3 before being terminated in 2007.
Lockheed Martin Coherent Technologies (LMCT) also pursued an airborne SAL system17 and presented ground and airborne results at the Laser Sensing and Communications (LS&C) meeting in Toronto in 2011 (Figure 3-8). While the RSAS SALTI system employed a fiber-based linear FMCW system like the Aerospace system, the LMCT system used a wide-bandwidth (7 GHz, 2 cm resolution), pulse-coded approach.
17 B. Krause, J. Buck, C. Ryan, D. Hwang, P. Kondratko, A. Malm, A. Gleason, and S. Ashby, 2011, “Synthetic aperture ladar flight demonstration,” in CLEO:2011, Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), PDPB7.
FIGURE 3-7 SAL-boat target and mosaicked SAL image results. The real-aperture diffraction-limited illuminating spot size is represented at the right. A picture of the target is shown at the left. This target consists of the same retroreflective material used for the triangle images, placed behind a transparency containing the negative of the sailboat image. The image was formed by scanning the target in overlapping strips and then pasting these images together to form a larger image. Some degradation is present due to the phase-screening effects of the transparency film; however, the pattern of the retroreflective material is clearly visible. The range to target in this example was ~2 m, with range diversity achieved by placing the target at a 45-degree angle with respect to the incident light.
SOURCE: S.M. Beck et al. op. cit.
FIGURE 3-8 SAL demonstration images. (a) Photograph of the target. (b) SAL image, no corner cube glints. Cross range resolution = 3.3 cm, 30× improvement over the spot size. Total synthetic aperture = 1.7 m, divided into 10 cm subapertures and incoherently averaged to reduce speckle noise. (c) SAL image with corner cube glint references for clean phase error measurement. Cross range resolution = 2.5 cm, 40× improvement over the spot size. Total synthetic aperture = 5.3 m, divided into 10 cm subapertures and incoherently averaged to reduce speckle noise.
SOURCE: B. Krause, J. Buck, C. Ryan, D. Hwang, P. Kondratko, A. Malm, A. Gleason, and S. Ashby, 2011, “Synthetic aperture ladar flight demonstration,” in CLEO:2011 -Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), PDPB7.
The range resolution of any ladar, including SAL, is limited by the transmitted waveform bandwidth, assuming the receiver can capture the returned signal at this bandwidth. This limits the performance of technologies that cannot achieve significant transmitter bandwidth (such as the CO2 lasers employed in the Northrop Grumman (NGES) approach to the SALTI program). In the absence of bandwidth, angular resolution in the cross-motion direction is limited by the size of the real aperture, usually in elevation. The diffraction limit in this dimension can be improved by a factor of almost two using techniques described in the MIMO section. The along-track angular resolution is limited by the size of the synthetic aperture:
Δθ ≈ λ / (2L + D),

where λ is the wavelength, D is the real aperture diameter in the along-track direction, and L is the distance moved in the along-track direction. In microwave SAR the size of the real aperture is neglected, reducing the expression to Δθ ≈ λ/(2L).
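As a numeric illustration of the along-track resolution relation just described, the following sketch evaluates it for a notional system; the wavelength, aperture, range, and synthetic-aperture length are illustrative assumptions, not values from any fielded system.

```python
# Notional SAL along-track resolution sketch (all parameter values assumed)
wavelength = 1.55e-6   # m, typical telecom-band coherent source
D = 0.05               # m, real aperture diameter (along-track)
L = 1.0                # m, distance moved in the along-track direction
R = 10e3               # m, range to target

# Synthetic-aperture angular resolution, with the real aperture neglected
# in the microwave-SAR limit (D -> 0 gives the familiar lambda/(2L))
theta_synthetic = wavelength / (2 * L + D)
theta_real = wavelength / D          # real-aperture diffraction limit

cross_range_res = R * theta_synthetic        # cross-range resolution at range R
improvement = theta_real / theta_synthetic   # gain over the real aperture

print(f"synthetic angular resolution: {theta_synthetic:.3e} rad")
print(f"cross-range resolution at {R/1e3:.0f} km: {cross_range_res*1e3:.1f} mm")
print(f"improvement over real aperture: {improvement:.0f}x")
```

With these assumed numbers, one meter of flown aperture turns a 5 cm telescope into a system with millimeter-class cross-range resolution at 10 km, which is the essential appeal of aperture synthesis.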
The next major limitation is on range performance (how far away the viewed object can be, not the range resolution), which is constrained by both atmospheric transmission and the modest aperture sizes that aperture synthesis enables. The effect of the atmosphere is discussed further in a later section.
For example, consider the problem of achieving high spatial resolution at low elevation angle through long atmospheric slant paths. At shorter wavelengths, the diffraction-limited resolution is very good, but the propagation through the atmosphere, both from molecular scattering and aerosols, is quite limited. Range can be increased at longer wavelengths where the scattering attenuation is reduced, but at the expense of spatial resolution, until aperture sizes become impractical. Synthetic aperture techniques offer a solution to this dilemma by enabling operation at longer wavelengths, using aperture synthesis to achieve high resolution in the along-track direction with modest aperture sizes.
Of course the SWaP advantages of modest aperture sizes come at the price of collecting fewer photons on receive from each image resolution element, making synthetic aperture ladar a rather power-hungry technique, placing a premium on laser and detector efficiency, and driving laser power requirements. This is a fundamental consideration: whenever resolution is increased, photons are scattered from a smaller resolution cell. With a monolithic real aperture, the smaller resolution cell is accompanied by a proportional increase in receiver collection area. With a synthetic aperture, the increase in resolution is not accompanied by a proportional increase in the receiver collection area.
An engineering challenge for SAL is motion compensation. The full sample of the field across the large pupil-plane synthetic aperture is only available to the extent that any movement occurring while the pupil-plane field is being collected has been removed. The quality of the motion compensation can therefore limit the SAL resolution.
Atmospheric turbulence is expected to be a limiting phenomenon for SAL imaging at long range. A detailed model of the impact of refractive turbulence on image formation has been presented by Karr18 based on earlier treatments by Fried.19 A limited discussion of the effects of atmospheric turbulence on ladars is given in Chapter 4.
As a coherent ladar technique, the technology burdens on the receiver are relatively modest. Deramp-on-receive systems (linear frequency modulated, or FM, systems with a chirped LO) have modest detector bandwidth requirements. They do not require the full bandwidth of the transmitted waveform, just enough bandwidth to accommodate the range depth of the target. This is sometimes called stretch processing: the LO chirps along with the return signal to reduce the required detector bandwidth. Because image formation is through synthetic aperture processing of the received signal, in principle only a single detector is required. If, however, only one detector is used, the area imaged would be limited to the diffraction limit of the real beam. This is not a problem in SAR, since real-beam microwave resolution is so poor, but it would severely limit the area covered for SAL. In practical systems covering
18 T.J. Karr, 2003, “Synthetic aperture ladar resolution through turbulence,” Proc. SPIE 4976: 22.
19 D.L. Fried, 1966, “Optical resolution through a randomly inhomogeneous medium for very long and very short exposures,” Journal of the Optical Society of America, 56 (10): 1372.
TABLE 3-1 Required SAL Laser Power for Two Sets of Modeling Assumptions: Airborne System and Spaceborne System
NOTE: D, real aperture diameter in along-track direction; L, distance moved in along-track direction; H, Height of the platform; DF, duty factor; B, bandwidth; P, Power.
SOURCE: W. Buell, N. Marechal, D. Kozlowski, R. Dickinson, S. Beck, 2002, “SAIL: Synthetic aperture imaging ladar,” Meeting of the MSS Specialty Group on Active E-O Systems, Vol. I, C15.
useful areas, typically one uses a modest array of photodetectors to build up multiple real-aperture spots to form an image of a moderate size area. The Raytheon SALTI system used a 1-D array of p-doped-intrinsic-n-doped (p-i-n) photodiodes and scanned that 1-D array. Synthetic aperture processing was performed on each detector separately. Although with sufficient LO power, coherent detection can reach the shot noise limit, there is still a premium on low-noise receivers in order to reduce the required LO power, especially for large arrays. Besides the detectors themselves, the optics of a coherent ladar system, particularly ones with an array of detector elements, can be quite challenging, since the wavefront overlap must be optimized for good heterodyne efficiency.
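The deramp-on-receive (stretch-processing) scheme described above can be sketched numerically: mixing the return against a chirped LO converts a range offset into a low beat frequency, so a slow detector suffices even for a gigahertz-class chirp. All parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative linear-FM (chirp) parameters -- assumed, not from any system
c = 3.0e8       # m/s, speed of light
B = 1.0e9       # Hz, transmitted chirp bandwidth
T = 1.0e-3      # s, chirp duration
K = B / T       # Hz/s, chirp rate
fs = 2.0e6      # Hz, detector sampling rate -- far below the 1 GHz chirp
R_off = 75.0    # m, target offset from the LO reference delay
tau = 2 * R_off / c   # round-trip delay relative to the LO

t = np.arange(0, T, 1 / fs)
# After mixing with the chirped LO (deramping), the target appears as a
# constant beat tone at frequency K*tau, well within the detector bandwidth.
beat = np.exp(1j * 2 * np.pi * K * tau * t)

# A simple FFT locates the beat tone; its frequency maps back to range.
spec = np.abs(np.fft.fft(beat))
freqs = np.fft.fftfreq(len(t), 1 / fs)
f_peak = abs(freqs[np.argmax(spec)])
R_est = f_peak * c / (2 * K)   # recovered range offset
```

The point of the sketch is the bandwidth arithmetic: a 1 GHz chirp is received here with a 2 MHz detector, because the detector only needs to span the range depth of the scene, not the full transmitted bandwidth.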
The requirements on the transmitter are more challenging. As noted above, the modest aperture size that SAL enables means that a smaller fraction of the return light is captured, which must be made up by increased transmitter power. Assuming resolution requirements and aperture diameter are held fixed, the required laser power for a SAL system scales as R3. Some example link budgets for notional SAL systems20 are reproduced in Table 3-1. In addition to the transmitter power requirements, SAL also places stringent requirements on local oscillator phase stability. The LO must remain phase coherent over the round trip of the range to target, unless a sample of the transmitted waveform is stored for use in beating against the return signal. One storage method to reduce coherence length requirements is to input a sample of the master oscillator into a long fiber delay and use the output of that fiber as an LO rather than creating the LO from the master oscillator as it exists when the signal is returned from the target. Finally, although less stringent than the LO phase stability requirements, the transmitter waveform quality can be challenging. For a linear FM system, the chirp must be linear to within, say, 20 degrees of phase error over the chirp. This requirement need not be levied entirely on the hardware, however. If transmit waveform phase errors can be monitored, they can be compensated for in the SAL processing. This is the approach taken in the early work at Aerospace. Later work by Bridger Photonics and the University of Montana21 employing similarly wide bandwidth chirps has achieved the required linearity through sophisticated laser frequency and phase stabilization techniques, simplifying the required processing. The
20 W. Buell, N. Marechal, D. Kozlowski, R. Dickinson, and S. Beck, 2002, “SAIL: Synthetic aperture imaging ladar,” Meeting of the MSS Specialty Group on Active E-O Systems, Vol. I, C15.
21 S. Crouch and Z.W. Barber, 2012, “Laboratory demonstrations of advanced synthetic aperture ladar techniques,” Optics Express, 20(22): 24237.
Bridger system has demonstrated resolutions of a few microns in laboratory settings and centimeter resolution in long-range field tests.
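The R3 laser-power scaling stated above can be made concrete with a small sketch; the reference power and range below are purely illustrative assumptions, not values taken from Table 3-1.

```python
def required_power(R_m, P_ref_W=10.0, R_ref_m=10.0e3):
    """Scale a reference SAL laser power to range R_m using the stated
    R**3 law, with resolution and receive aperture held fixed.
    The 10 W at 10 km reference point is an assumed example."""
    return P_ref_W * (R_m / R_ref_m) ** 3

# Doubling the range requires 8x the power; 10x the range requires 1000x.
p_20km = required_power(20.0e3)    # 80 W
p_100km = required_power(100.0e3)  # 10,000 W
```

The cubic growth (rather than the R2 of a fixed-beam link budget) arises because holding cross-range resolution fixed forces the synthetic aperture, and hence the dwell geometry, to grow with range, which is why the spaceborne entries in Table 3-1 demand far more power than the airborne ones.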
Pointing control on SAL systems is not overly stressing, as the transmitter illuminates an area large compared to the SAL spatial resolution. The pointing requirement is simply that the illumination cover the desired target area. Pointing knowledge is a much tighter requirement, at a fraction of a resolution element.
As noted above, several contractors have developed and demonstrated synthetic aperture ladar systems using a variety of technological approaches and with a significant range of performance parameters. The other country that has demonstrated significant interest in SAL imaging has been China, with a series of papers since 2004 addressing both hardware demonstrations22 (based largely on U.S. published results) and theoretical analyses, including an advanced algorithm for atmospheric compensation.23
It is reasonable to expect that the advantages of SAL and related advanced coherent active imaging techniques will drive the research, development, and deployment of such systems in a variety of countries. This will drive development of high power coherent laser systems capable of achieving wide bandwidth waveforms, as well as long coherence length LOs. Advances in modest size low-noise coherent receiver arrays and techniques for improving heterodyne mixing efficiency over array detectors can also be expected.
A closely related technology to SAL is inverse synthetic aperture ladar (ISAL), in which the transceiver is stationary and the target moves and/or changes aspect relative to the transceiver. This technology is of significant interest for ground-based imaging of space objects, including GEO satellites. ISAL has been the subject of research programs in the United States, such as the DARPA LongView program24 and research at AFRL.25 It has also been the subject of research in China.26,27 It is natural to expect that organizations researching SAL would also be researching ISAL. From the information available in the open literature, the technological developments in China along these lines are not as advanced as those in the United States, but there is ample evidence of interest in continued development.
Conclusion 3-3: Synthetic aperture ladar enables high-resolution active imaging at long range with modest size receiver optics. A synthetic aperture ladar has the potential to provide long-range, easily interpretable (person friendly) imagery because optical systems tend to use mostly diffuse scattering from the viewed object.
Conclusion 3-4: Significant foreign interest in synthetic aperture ladar technology has been demonstrated.
22 W. Jin, 2010, “Matched filter in synthetic aperture ladar imaging,” Acta Optica Sinica 2010-07.
23 L. Guo, M. Xing, Y. Tang, and J. Dan, 2008, “A novel modified omega-k algorithm for synthetic aperture imaging lidar through the atmosphere,” Sensors 8: 3056.
25 C.J. Pellizzari, C.L. Matson, and R. Gudimetla, 2011, “Inverse synthetic aperture LADAR for geosynchronous space objects—Signal-to-noise analysis,” Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Wailea, Maui, Hawaii, September 13-16, S. Ryan, ed.
26 X. Zhao, X. Zeng, C. Cao, Z. Feng, and C. Fu, 2009, “Research on inverse synthetic aperture ladar,” Proc. SPIE, 7382, International Symposium on Photoelectronic Detection and Imaging 2009: Laser Sensing and Imaging, August 27.
27 G. Liang, 2009, “Study on experiment and algorithm of synthetic aperture imaging lidar,” Ph.D. dissertation, Xi’an University of Electronic Science and Technology.
FIGURE 3-9 Diagram of off-axis holography experiment.
SOURCE: J. Marron, Raytheon, Space and Airborne Systems, “3D Holographic Laser Radar,” presentation to the committee, March 5, 2013.
In summary, synthetic aperture ladar is becoming an increasingly mature coherent ladar technology that offers considerable system-level and performance advantages over real-aperture imaging ladar systems. The technology base continues to evolve, enabling improved angular resolution and increased standoff range. It is reasonable to expect that this technology will proliferate to other countries and initial indicators of this can be seen.
Digital holography, also referred to as spatial heterodyne detection, utilizes two light beams with a spatial/angular difference (as opposed to a frequency difference) between them. In digital holography, the return signal from an illuminated object is coherently interfered with a reference beam. The reference beam can be a glint/retroreflector in close physical proximity to the object of interest, or a local oscillator signal that traverses a similar optical path length to the return from the object in order to maintain coherence between the two optical beams. Figure 3-9 depicts a typical holographic arrangement using a reference or local oscillator beam. The laser is split into two beam paths; the transmitter path illuminates the target, and the reflection from the target is interfered, using a beam splitter, with the reference beam on a detector focal plane array (FPA). The FPA for spatial heterodyne detection can be a framing array; it does not need high bandwidth, because only the fringes across the array are being detected, not any high-bandwidth signals.
The interference between the object and reference beams is recorded on the charge-coupled device (CCD) detector. Although detectors record only intensity and do not directly preserve the phase profile of the electric field, digital holography provides a means to extract the spatial phase variation across an optical aperture using the spatial beat frequency between the signal and the LO. As previously discussed in the section on synthetic aperture ladar, having access to both the amplitude and phase of the optical field enables capabilities not readily possible with intensity-only imaging; with digital post-processing, the exact electric field at any point can be calculated from spatial heterodyne phase extraction. This can allow for digital refocusing and 3-D imaging.
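The phase-extraction step can be simulated in a few lines. The sketch below assumes an idealized off-axis geometry with a tilted plane-wave reference; the array size, band limit, and carrier frequency are arbitrary illustrative choices. The detector sees only intensity, yet the complex field is recovered exactly by isolating one sideband of the hologram spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
N, cut, f0 = 256, 20, 64   # array size, object band half-width, carrier (cycles/aperture)

# Band-limited complex "object" field arriving at the detector (pupil) plane
field = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
F = np.fft.fftshift(np.fft.fft2(field))
mask = np.zeros((N, N))
mask[N//2-cut:N//2+cut, N//2-cut:N//2+cut] = 1.0
obj = np.fft.ifft2(np.fft.ifftshift(F * mask))

# Tilted plane-wave reference introduces a spatial carrier at f0
x = np.arange(N)
ref = np.exp(1j * 2 * np.pi * f0 * x[None, :] / N)

# The detector records intensity only -- the phase is encoded in the fringes
holo = np.abs(obj + ref) ** 2

# FFT the hologram: the obj*conj(ref) cross term sits at column -f0, clear of
# the baseband autocorrelation term because f0 - cut exceeds its half-width 2*cut
H = np.fft.fftshift(np.fft.fft2(holo))
side = np.zeros_like(H)
c0, c1 = N//2 - f0 - cut, N//2 - f0 + cut
side[N//2-cut:N//2+cut, c0:c1] = H[N//2-cut:N//2+cut, c0:c1]
side = np.roll(side, f0, axis=1)            # demodulate the spatial carrier
rec = np.fft.ifft2(np.fft.ifftshift(side))  # recovered complex field

# Recovery is exact (to machine precision) when the sidebands do not overlap
err = np.linalg.norm(rec - obj) / np.linalg.norm(obj)
```

This is the sense in which the technique "preserves phase": once the complex pupil-plane field is in hand, digital propagation, refocusing, and 3-D processing all become post-processing operations.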
When the scenario or requirements allow a retroreflector to be placed in proximity to the target, the coherence requirements on the system may be lessened. In addition, aberrations imparted by turbulence in the atmosphere are nearly identical if the retroreflector is within the same isoplanatic patch as the target, meaning that the effects of turbulence will be minimized. Early long-range holography experiments were performed at up to 12 km.28
In heterodyne detection, the image term is proportional to the strength of the LO component multiplied by the strength of the object component; an extremely weak image signal can be magnified by
28 J.W. Goodman, D.W. Jackson, M. Lehmann, and J. Knotts, 1969,” Experiments in long-distance holographic imagery,” Appl Optics 8: 1581.
a strong reference or LO signal. This can be strongly advantageous when extracting the image component of interest. An important feature of a strong LO arrangement is that the object’s autocorrelation term will be extremely weak and negligible compared to the strength of the image term, allowing the use of lower spatial frequencies while still achieving separation between image terms.29 The maximum strength of the local oscillator will be limited by the electron well capacity of the detector pixel. Detector saturation occurs when the well capacity is exceeded.
Techniques using this technology go by several different names. In addition to digital holography, spatial heterodyne detection, and lensless imaging, scientists and engineers in the field also refer to this technology as “holographic aperture ladar”30 and “spatially processed image detection and ranging (SPIDAR).”31
In addition to performing single-wavelength digital holography, multiple-wavelength holography is a subset that allows for fine-resolution 3-D imaging. Figure 3-10 shows results using multiple-wavelength digital holography. As shown in Figure 3-10 (b) and (c), active imaging provides intensity and phase information about the object, giving both shading and, with multiple wavelengths, depth information.
By utilizing two wavelengths, a contour map of the object surface can be generated by recording two holograms at two wavelengths; a fringe pattern is generated by reconstructing and superimposing the holograms.32 Alternatively, using a tunable laser source, a series of holograms is recorded for a set of laser frequencies.33 The collected dataset has coordinates of (spatial frequency, spatial frequency, laser frequency). By Fourier transforming the hologram data, the resulting image has coordinates of (angle, angle, range). The range resolution ΔRres of the system is given by:34
ΔRres = c / (2 Δνtot),

where Δνtot is the total frequency bandwidth over which the source is tuned. Additionally, the range ambiguity interval is given by ΔRunamb = c/(2 Δνinc), where Δνinc is the frequency sampling increment for the laser. Using multiple-wavelength holography, it is possible to measure a large range with high depth resolution without the 2π ambiguities of the phase difference that can occur with the two-wavelength version.
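A short numeric sketch of these two relations follows; the tuning bandwidth and frequency step are illustrative assumptions.

```python
# Multiple-wavelength holography depth budget (assumed tuning parameters)
c = 3.0e8          # m/s
dnu_tot = 1.0e12   # Hz, total tuning bandwidth (1 THz, assumed)
dnu_inc = 1.0e10   # Hz, frequency step between holograms (10 GHz, assumed)

dR_res = c / (2 * dnu_tot)            # depth resolution: 0.15 mm
dR_unamb = c / (2 * dnu_inc)          # unambiguous depth window: 15 mm
n_holograms = int(dnu_tot / dnu_inc)  # 100 holograms must be recorded
```

The trade is explicit here: finer depth resolution requires more total tuning bandwidth, while a deeper unambiguous window requires finer frequency steps, and the product of the two fixes the number of holograms that must be collected.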
Digital holographic EO sensing has military applications in intelligence, surveillance, and reconnaissance (ISR), target tracking, target identification, and directed energy. Beyond these specific applications, digital holographic EO sensing is more broadly used in biomedical imaging/microscopy, imaging through scattering media, horizontal path imaging, and 3-D imaging for commercial and entertainment purposes. A broad overview of the latest advancements in digital holography was published recently35 and articles therein discuss research topics across a wide variety of applications.
Digital holography is a sensing technology. However, the outcomes of spatial heterodyne detection may influence the overall system architecture for a larger system, such as a directed energy system, which needs to transmit a laser beam. Using digital holography, phase aberrations present in the atmosphere between the object and sensor can be estimated. Electro-optic phase modulators, liquid crystal spatial light modulators, or piezo mirrors can pre-distort the outgoing illumination beam to compensate
29 J.C. Marron, 2009, “Photon noise in digital holographic detection,” AFRL-RD-PS-TP-2009-1006.
30 B.D. Duncan and M.P. Dierking, 2009,” Holographic aperture ladar,” Appl. Opt. 48: 1168.
31 J.C. Marron, 2008, “Spatially processed image detection and ranging (SPIDAR),” in IEEE LEOS Meeting (IEEE 2008), 509.
32 B. Hildebrand and K. Haines, 1967, “Multiple-wavelength and multiple-source holography applied to contour generation,” J. Opt. Soc. Am. 57: 155.
33 A. Wada, M. Kato, and Y. Ishii, 2008, “Multiple-wavelength digital holographic interferometry using tunable laser diodes,” Appl. Optics 47: 2053.
34 J.C. Marron and K.S. Schroeder, 1993, “Holographic laser-radar,” Opt. Lett. 18: 385.
35 M.K. Kim, Y. Hayasaki, P. Picart, and J. Rosen, 2013, “Digital holography and 3D imaging: introduction to feature issue,” Appl Optics 52: Dh1-Dh1.
FIGURE 3-10 (a) Passive broadband image of a mannequin taken at a 100 m range using 1.0 to 1.65 micron light; (b) active image of same object using 1.6 µm illumination; (c) 3-D phase difference acquired through active illumination. SOURCE: J.C. Marron, R.L. Kendrick, S.T. Thurman, N.L. Seldomridge, T.D. Grow, C.W. Embry, and A.T. Bratcher, 2010, “Extended-range digital holographic imaging,” Proc. SPIE, Laser Radar Technology and Applications XV, 7684, 76841J.
for these aberrations. As a result, digital holography may influence the entire system architecture, from how outgoing beams are transmitted to how return beams are received.
For horizontal-path imaging, published results show an extended range of 1.5 km and voxel dimensions of 3 cm. The work was performed at 1.6 µm using an Er:YAG laser. The detector array was a commercial InGaAs array with 640 × 512 pixels. Images were taken of a truck, a mannequin, a resolution chart, a model missile, and some calibration blocks.
Spatial heterodyning is an active, emerging technology. Work has been performed in research laboratories as well as in field test demonstrations.36 Trends in this area have recently focused on variations of aperture and focal plane array arrangements, discussed further in the next section on multiple-input, multiple-output (MIMO) receiver and transmitter geometries. Digital holography was first performed using a single focal plane array. In order to increase the aperture size, current work is geared toward optimizing this technology for multiple apertures (with gaps between adjacent apertures)37,38 and synthetic apertures (combining multiple apertures together to form a zero-gap, larger full aperture).39,40,41,42 Proper phasing between individual arrays increases the complexity of these multiaperture systems compared to a single FPA; this is a current area of research.43
The maximum angular resolution is set by the full effective FPA width when the image is sampled in the pupil plane. Therefore, a larger effective FPA size will result in finer angular resolution for a given wavelength and imaging distance. Utilizing synthetic aperture techniques with spatial heterodyning can further increase the effective array size. Optical magnification can also improve the
36 J.C. Marron, R.L. Kendrick, N. Seldomridge, T.D. Grow, and T.A. Hoft, 2009, “Atmospheric turbulence correction using digital holographic detection: Experimental results,” Opt. Exp. 17: 11638.
37 J.W. Haus, N.J. Miller, P. McManamon, and D. Shemano, 2011, “Digital holography for coherent imaging for multi-aperture laser radar,” Conference paper, Digital Holography and Three-Dimensional Imaging, Tokyo Japan, May 9-11, Optical Society of America, p. DMA3.
38 R.L. Kendrick, J.C. Marron, and R. Benson, 2009, “Anisoplanatic wavefront error estimation using coherent imaging,” in Coherent Laser Radar Conference (Toulouse, France), 205.
39 A.E. Tippie, A. Kumar, and J.R. Fienup, 2011, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Exp. 19: 12027.
40 J.H. Massig, 2002, “Digital off-axis holography with a synthetic aperture,” Opt. Lett. 27: 2179.
41 D. Claus, 2010, “High resolution digital holographic synthetic aperture applied to deformation measurement and extended depth of field method,” Appl. Opt. 49: 3187.
42 R. Binet, J. Colineau, and J.C. Lehureau, 2002, “Short-range synthetic aperture imaging at 633 nm by digital holography,” Appl. Opt. 41: 4775.
43 B.K. Gunturk, N.J. Miller, and E.A. Watson, 2012, “Camera phasing in multi-aperture coherent imaging,” Opt Express 20: 11796.
angular resolution by effectively expanding the size of the FPA. The field of view of the system is determined by the pixel pitch of the detector elements: the smaller the pixel pitch, the larger the field of view for pupil-plane imaging. As with many optical systems, the push is still toward larger and larger arrays and smaller and smaller pixels. The roles of pixel pitch and FPA array size are reversed when the FPA is placed in the image plane.
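These pupil-plane sampling relations can be sketched for a notional receiver. The pixel pitch, array size, and wavelength below are assumptions, and the factor of 2 in the field-of-view expression reflects an assumed Nyquist sampling of the fringe carrier.

```python
# Pupil-plane digital holography sampling budget (all values assumed)
wavelength = 1.6e-6  # m, matching the 1.6 um systems discussed above
pitch = 10.0e-6      # m, detector pixel pitch
N = 640              # pixels across the array

W = N * pitch                   # effective pupil width: 6.4 mm
theta_res = wavelength / W      # angular resolution: ~0.25 mrad
fov = wavelength / (2 * pitch)  # field of view: ~80 mrad (Nyquist-limited)

# Doubling the effective width (a larger array, or a synthetic aperture)
# halves theta_res; halving the pixel pitch doubles the field of view.
```

The sketch makes the text's trade concrete: array width buys resolution, pixel pitch buys field of view, and the two roles swap when the FPA is moved to the image plane.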
Laser requirements are an important consideration for digital holographic imaging. This technology requires coherent illumination with narrow linewidths in order to record the interference between the local oscillator and object return signal. Narrow linewidths and coherence length requirements are inherent issues for all types of coherent imaging; the section on synthetic aperture ladar previously discussed provides details regarding these issues. The narrow linewidth required may be considered a constraint of the technology, as the current cost of these lasers is significant.
Another fundamental drawback of spatial heterodyning is the speckle effects inherently present in highly coherent light. Speckle effectively reduces the angular resolution from the theoretical limit;44 techniques such as averaging independent speckle realizations can be performed to reduce the speckle.45 Independent speckle realizations require multiple exposures and/or dividing the full aperture into subapertures, thus increasing the acquisition and processing time.
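The benefit of averaging independent speckle realizations can be verified with a small Monte Carlo sketch: for fully developed speckle, the intensity contrast of an N-look average falls as 1/sqrt(N). The sample counts below are arbitrary choices for the simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_contrast(n_avg, n_pix=200_000):
    """Contrast (std/mean) of the pixel-wise average of n_avg independent
    fully developed speckle intensity patterns. Fully developed speckle is
    modeled as the squared magnitude of circular complex Gaussian fields."""
    amp = rng.normal(size=(n_avg, n_pix)) + 1j * rng.normal(size=(n_avg, n_pix))
    intensity = np.abs(amp) ** 2
    avg = intensity.mean(axis=0)
    return avg.std() / avg.mean()

c1 = speckle_contrast(1)    # ~1.0: single-look speckle has unity contrast
c16 = speckle_contrast(16)  # ~0.25: 16 independent looks cut contrast 4x
```

The 1/sqrt(N) law is what makes speckle averaging expensive: halving the residual speckle noise quadruples the number of exposures or subapertures, and hence the acquisition and processing time noted above.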
Atmospheric turbulence may ultimately be an external fundamental limit affecting this technology. Although digital holography allows for phase aberration correction, correction may be limited to a small isoplanatic patch if the turbulence is severe enough and evenly distributed throughout the imaging path.46
As an emerging technology, significant indicators of technology development will most likely be seen through research funding and publications in universities, research institutions, government laboratories, or private industry. Demonstration projects are most likely; production scaling would indicate significant progress and acceptance of digital holography. The further development of digital holography for commercial applications such as 3-D technology for entertainment could drive this technology forward as well.
Spatial heterodyning has two main advantages over both passive imaging systems and many other forms of ladar systems. First, spatial heterodyning is potentially a “lensless” imaging technique. Unlike conventional imaging techniques that use lenses and other optics to form the conjugate of the image on the detector plane, spatial heterodyne systems can map the pupil plane directly onto the detector. The image is then digitally computed from the recorded detection. From a practical standpoint, SWaP may be reduced compared to these conventional systems, since the weight and volume of the optics/lenses can be removed. As the desire to create larger and larger focal planes to obtain higher imaging angular resolution increases, the weight and volume of the corresponding optics for a focal-plane imaging system also increase significantly. Using a spatial heterodyne imaging modality, the focal plane aperture area can continue to increase with less burden on SWaP requirements.
If the receive aperture is not a 1:1 ratio with the FPA, a telescope is required to adjust the size of the pupil plane. However, in synthetic aperture digital holography, multiple smaller telescopes may be used, still allowing for a reduction in volume and weight compared to a single (longer) focal length telescope for the entire FPA.
Atmospheric turbulence can severely degrade both the outgoing illumination beam and the return signal, resulting in the need for higher laser power due to increased beam divergence, as well as reduced angular resolution and degraded imagery. Even as the aperture size of the imaging system is increased for higher angular resolution, atmospheric turbulence can undermine and limit the achievable resolution. In astronomy, good seeing for telescopes, characterized by the coherence area (the area over which the incoming light is considered to be spatially correlated), is
44 A. Kozma, and C.R. Christensen, 1976, “Effects of speckle on resolution,” J Opt Soc Am 66: 1257.
45 J.W. Goodman, 2007, Speckle Phenomena in Optics: Theory and Applications, Roberts & Co.
46 A.E. Tippie and J.R. Fienup, 2010, “Multiple-plane anisoplanatic phase correction in a laboratory digital holography experiment,” Opt. Lett. 35: 3291.
typically on the order of 10 cm. For ground-to-ground or horizontal-path imaging scenarios, the coherence area is even smaller. Adaptive optics, using wavefront sensors and deformable mirrors, is one way astronomers combat the turbulence problem, at the cost of adding weight and complexity to the complete imaging system. Digital holographic techniques can address some of these atmospheric turbulence effects directly; initial results are presented in Marron et al.47 and Tippie and Fienup.48
The fundamental limits associated with digital holography—laser coherence requirements, speckle effects, and degradation in imaging due to atmospheric turbulence—would be factors that may be seen as downsides to implementing this technology.
Since pupil-plane spatial heterodyne does not directly acquire focal plane images on the detector array, processing is required to extract the desired information. In the case of optimizing for aberration correction and/or proper phasing of multiapertures, additional computation is required. As the focal plane size continues to increase, the computation burden will scale as well. Dedicated hardware and parallelization of processes will be required to reduce the processing time as much as possible. Parallel implementation or use of graphics cards may reduce the computation time for the image reconstruction process. As developments in processing continue, the burden of postprocessing for image correction will lessen.
As an emerging technology, there is strong possibility for digital holography to grow. As mentioned previously, digital holography has applicability in intelligence, surveillance and reconnaissance (ISR), target tracking, target identification, and directed energy. As the technology continues to advance, digital holographic systems will likely be designed with these applications in mind. Advancements in narrow laser linewidths and laser power will have direct impact on the ranging capabilities of this technology, as longer and longer ranges are desired.
If digital holography is implemented as “lensless imaging,” one possible future capability would be the use of digital holography with conformal FPAs. An array of conformal focal planes could match the shape of the surface of the designated platform (vehicle, plane, etc.), enabling more flexibility in design, as well as the collection and capture of returned light. However, many serious technical issues such as the conformal focal planes technology itself, as well as analysis and implementation of local oscillator illumination of a curved surface, would need to be researched and proven before such designs could be considered.
Researchers worldwide are actively involved in the broad field of digital holography. Key countries outside the United States include France, Germany, Israel, Japan, and China.49 While the United States may be considered a leader in digital holography for remote sensing applications, researchers in Pacific Rim countries continue to make significant progress in holography for commercial 3-D imaging and display applications.
In summary, current trends in spatial heterodyning have focused on variations of aperture and FPA arrangements using multiple apertures (with gaps between adjacent apertures) and using motion to form synthetic apertures (combining multiple apertures sampled at different times to form a zero-gap, larger full aperture).
Conclusion 3-5: Digital holography/spatial heterodyne is a growing segment of active EO sensing, evidenced by conferences being held across the world that cover the diverse applications of this technical area.
47 J.C. Marron, R.L. Kendrick, N. Seldomridge, T.D. Grow, and T.A. Hoft, 2009, “Atmospheric turbulence correction using digital holographic detection: Experimental results,” Opt. Exp. 17: 11638.
48 A.E. Tippie and J.R. Fienup, 2010, “Multiple-plane anisoplanatic phase correction in a laboratory digital holography experiment,” Opt. Lett. 35: 3291.
49 See, for example, H. Luo, X.H. Yuan, and Y. Zeng, 2013, “Range accuracy of photon heterodyne detection with laser pulse based on Geiger-mode APD,” Optics Express 21(16): 18983.
Using multiple transmit apertures and multiple receive apertures allows design flexibility not present in monolithic apertures. A SAR comprises a single moving aperture, so the transmit and receive apertures both move.50,51,52,53 A SAL likewise uses motion of a single aperture to synthesize a larger effective aperture. For both SAR and SAL, it is as though the synthetic aperture were almost twice as large as the actual flown distance, because the angle of incidence is equal to the angle of reflection. This has been demonstrated experimentally over decades in the microwave region and, more recently, in the optical regime when SAL was demonstrated.54,55 Duncan56 provides a crossrange resolution equation for spotlight-mode synthetic aperture ladar:
ΔCR = λR/(2L + Dreal),

where Dreal is the size of the real aperture, R is the distance to the object, λ is the wavelength, and L is the distance moved. As a result, the effective aperture from a diffraction point of view is

Deff = 2L + Dreal.
For RF systems, the real aperture is tiny (on the order of meters) compared to the distance moved in a SAR (likely multiple kilometers), so the size of the real aperture is neglected, making the effective aperture twice as large as a monolithic aperture of width equal to the distance flown. In an EO system, the real aperture can be a significant fraction of the size of the moved aperture, so the Dreal term is retained.
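The contrast between the RF and EO cases can be made concrete with a short numerical sketch of the crossrange-resolution expression above. The range, wavelength, aperture, and travel values below are illustrative assumptions, not taken from the text.

```python
# Cross-range resolution for spotlight-mode synthetic aperture imaging,
# using the effective-aperture form D_eff = 2L + D_real. The numbers
# below are hypothetical examples for an RF SAR and an EO SAL.

def cross_range_resolution(wavelength, range_m, aperture_m, travel_m):
    """Return the cross-range resolution lambda * R / (2L + D_real)."""
    return wavelength * range_m / (2 * travel_m + aperture_m)

# RF SAR: meter-scale real aperture, kilometers of travel, so the real
# aperture is negligible compared with the synthesized one.
rf = cross_range_resolution(wavelength=0.03, range_m=50e3,
                            aperture_m=2.0, travel_m=5e3)

# EO SAL: the real aperture can be a significant fraction of the travel,
# so the D_real term is retained.
eo = cross_range_resolution(wavelength=1.5e-6, range_m=50e3,
                            aperture_m=0.1, travel_m=0.5)

print(f"RF SAR cross-range resolution: {rf:.3f} m")   # ~0.15 m
print(f"EO SAL cross-range resolution: {eo:.3f} m")
```

Dropping the `aperture_m` term in the RF case changes the answer by well under 0.1 percent, while in the EO case it changes the answer by roughly 10 percent, which is why the Dreal term is kept for EO systems.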
Instead of using motion to synthesize a larger effective aperture, MIMO active EO sensing uses multiple physical subapertures to create a larger effective aperture. An array of receive-only subapertures can synthesize an effective aperture as large as the receive array if the field across the array can be measured or estimated. With an array of transmit and receive subapertures, even more flexibility is obtained, so long as it is possible on receive to identify which transmitter each photon initially came from.
One effect of multiple transmit and receive subapertures is increased angular resolution, similar to the angular resolution increase from motion-based synthetic aperture sensors. Instead of motion, an array of n subapertures that both transmit and receive can be used. For nine subapertures in a row, the array will have a diffraction limit consistent with a monolithic aperture 1.89 times as large in diameter as the array. This is because eight subapertures are equivalent to the distance moved, L, while one subaperture is equivalent to the real aperture, Dreal. If transmission occurs from one subaperture in the middle of the array, the receive aperture array is effectively in its normal location. If the transmit beam is moved up one subaperture, it is as though the receive aperture were moved down one subaperture. Continuing this process gives the result shown in Figure 3-11, where the lighter-color linear arrays indicate the perceived location of the linear receive array, depending on which transmit subaperture is used, and the dark-color column shows the actual location of the arrays. The full extent of the linear arrays is
50 M.I. Skolnik, 1990, Radar Handbook (2nd ed.), McGraw-Hill, New York, Chapter 17 by Roger Sullivan, Eqs. 17.1 and 17.2, Figure 17.2.
51 M. Soumekh, 1999, Synthetic Aperture Radar Signal Processing with Matlab Algorithms, Wiley, New York, Section 2.6, Cross Range Resolution.
52 M.A. Richards, 2005, Fundamentals of Radar Signal Processing, McGraw-Hill, New York. Chapter 8.
53 M.I. Skolnik, 1980, Introduction to Radar Systems (2nd ed.), McGraw-Hill, New York. Chapter 14.
54 B. Krause et al., 2011, “Synthetic aperture ladar flight demonstration,” Conference on Lasers and Electro-Optics, PDPB7.
55 S.M. Beck et al., 2005, “Synthetic-aperture imaging laser radar: Laboratory demonstration and signal processing,” Appl. Opt. 44: 7621.
56 B.D. Duncan and M.P. Dierking, 2009, “Holographic aperture ladar,” Applied Optics 48(6): 1168.
FIGURE 3-11 Effective receiver aperture placement based on transmit subaperture utilized. Dark color shows actual location of the arrays. Light color shows perceived location. The number on the left shows how many receive subapertures are perceived to be at that location.
1.89 times larger, but it is sampled more near the middle of the effective aperture, represented by eight samples in the middle down to one on either end. This is similar to an apodized aperture.
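The bookkeeping behind the 1.89 factor can be checked with a short sketch. Each transmit shift displaces the perceived receive array by one subaperture, so the perceived arrays span 2(n - 1) + 1 subaperture widths. The indexing convention below is illustrative.

```python
# Effective-aperture extent, in subaperture widths, for a linear array of
# n subapertures that both transmit and receive. Shifting the transmit
# subaperture by one position shifts the perceived receive array by one
# position, so perceived receive locations span indices 0 .. 2*(n-1).

def effective_extent(n):
    """Perceived extent (in subaperture widths) of an n-element array."""
    positions = set()
    for t in range(n):            # transmit subaperture index
        for r in range(n):        # receive subaperture index
            positions.add(r + t)  # perceived receive location
    return max(positions) - min(positions) + 1

n = 9
print(effective_extent(n))       # 17 subaperture widths
print(effective_extent(n) / n)   # ~1.89x a monolithic aperture of the array size
```

For n = 9 this gives an extent of 17 subaperture widths, i.e., 17/9 ≈ 1.89 times the physical array, with the perceived locations sampled most densely near the middle, consistent with the apodized-aperture analogy in the text.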
The potential increased angular resolution from the arrays of transmit and receive subapertures is just one facet provided by MIMO imaging. If tagged transmitters are spaced at distances less than the size of a receive subaperture, it provides sampling that can allow closed-form solution to differences in atmospheric path across the subaperture array.57 An example of the resulting effective receiver pupil using illuminator diversity is shown in Figure 3-12. Because of the overlap, one can solve in a closed-form manner for the phase between subapertures, allowing its use to compensate for atmospheric phase disturbances in the pupil plane.
MIMO imaging systems are very new, and probably not all of their uses have yet been discovered. Multiple transmitters can, of course, allow more rapid compensation for speckle, because more speckle realizations can be gathered quickly.
MIMO techniques as described here can be implemented using either temporal or spatial heterodyne (digital holography) techniques. It will, however, be much easier to tag the emitted transmitter signals, allowing simultaneous transmission, if high-bandwidth temporal heterodyne is used, since high-bandwidth tagging schemes can then be employed. RF MIMO techniques that use multiple simultaneous phase centers have been developed by Coutts et al.58 Tyler discusses using transmitter diversity to phase up laser beams on transmit,59 an approach that could be used for an illuminator.
The angular resolution of an array of subapertures can be almost twice the diffraction-limited resolution of a monolithic aperture. In addition, atmospheric turbulence between the imaged object and the sensor can be compensated very quickly and accurately. This compensation is relatively straightforward for turbulence in the pupil plane but will be more difficult for volume turbulence. These are narrow-band sensors, but using multiple transmitters allows speckle mitigation. A significant
57 J.R. Fienup, 2000, “Phase error correction for synthetic-aperture phased-array imaging systems,” Proc. SPIE 4123-06: 47, Image Reconstruction from Incomplete Data, San Diego, Calif.
58 S. Coutts, K. Cuomo, J. McHarg, F. Robey, and D. Weikle, 2006, “Distributed coherent aperture measurements for next generation BMD radar,” Fourth IEEE Workshop on Sensor Array and Multichannel Processing.
59 G. Tyler, 2012, “Accommodation of speckle in object-based phasing,” J. Opt. Soc. Am. A 29(4).
FIGURE 3-12 Multiple subaperture overlap using three transmitters in a pattern about half a receive subaperture diameter apart. When images are adjusted to account for illuminator location, the receiver subaperture pupils overlap as shown.
SOURCE: D.J. Rabb, J.W. Stafford, and D.F. Jameson, 2011, “Non-iterative aberration correction of a multiple transmitter system,” Optics Express 19(25): 25048.
limitation is that the received signal is captured over a smaller receive aperture area. The required laser power will increase by the ratio of the area of the monolithic aperture to the area of the receive aperture array. Also, because the angular resolution is finer than that of a single subaperture, less laser return will come from each image voxel.
Arrays of high temporal bandwidth detectors will be very helpful in implementing MIMO techniques for EO imaging. The papers cited sequenced through the multiple transmitters because they implemented MIMO using a digital holography/spatial heterodyne approach to imaging, using low bandwidth framing detector arrays. If high-bandwidth detector arrays and a temporal heterodyne approach to imaging are used, it should be possible to simultaneously emit multiple tagged transmitter beams and to have each receiver be able to distinguish the transmit aperture any photon came from. High-bandwidth detectors will allow high-bandwidth modulations to be imposed on each transmitted beam, and sorted on receive. Temporal heterodyne arrays will need to be AC-coupled, or have high dynamic range, or have high sensitivity such that temporal heterodyne can be implemented using a relatively weak LO.
A second technical hurdle to overcome is volume turbulence. Techniques for calculating and compensating for volume turbulence still need to be developed.
Published work on multiple-aperture-array active sensor systems using transmitter as well as receiver diversity would be one indicator of active interest in this area. Work on high-bandwidth detector arrays suitable for use in temporal heterodyne sensors would be another indicator.
MIMO technology will allow imaging with high angular resolution using much lighter and more compact aperture arrays than a monolithic aperture. An array of small subapertures can be much thinner and lighter than a monolithic aperture. Also, a MIMO approach can achieve almost twice the diffraction limited angular resolution of a monolithic aperture. Speckle averaging using multiple transmitters will be another advantage.
To be really useful in freezing the atmosphere, MIMO should be implemented using high-bandwidth detector arrays, which still need to be further developed. Digital processing capability will be needed to perform the required calculations, and many separate optical trains will be needed, complicating the optical system. Narrow-linewidth lasers are required for either spatial or temporal heterodyne detection, and higher-power lasers will be required to image a given area with a MIMO array than with a monolithic aperture.
This technology is well suited for long-range imaging applications from air or space. MIMO could be used in the cross-range dimension along with motion-based synthetic aperture imaging.
The United States appears to have a lead in this technology so far, but the developments to date have not required large investments or significant infrastructure, so this lead could evaporate quickly. The United States also has a lead in high-bandwidth FPAs that can be used in MIMO applications. High-bandwidth FPAs are a more enduring lead in terms of the infrastructure required to produce them, which can help preserve the U.S. lead in this area.
In summary, MIMO approaches for active EO sensing can, at a minimum, increase the effective diameter of an aperture array by a factor of almost two and can allow multiple subapertures on receive to be phased using a closed-form calculation of the phase difference between subapertures. This can compensate for atmospheric turbulence at least at some locations between the sensor and the imaged object.
Conclusion 3-6: The multiple input, multiple output approach is a very promising active EO research area. At a minimum, it will be very valuable for longer-range imaging sensors, and is also likely to become valuable for many other applications.
Speckle is inherent in laser radar measurements. When a beam of light from a laser transmitter illuminates a target, phase irregularities occur in the light backscattered from the illuminated surface because of surface roughness on the scale of a wavelength of light. Interference among the various contributions to the optical field produces a speckle pattern of bright and dark intensity regions at the receiver. In many optical applications, speckle is considered a nuisance: it degrades target images obtained when conventional microwave-radar imaging techniques are applied to laser radar. However, researchers have known since the 1960s that the speckle pattern carries information about the physical properties of a target and have looked for ways to take advantage of that information.60
The shape, size, and distribution of speckle depend on a number of target-related parameters, including the surface roughness and the angular tilt and movement of the surface, as well as on sensor-related characteristics such as the laser wavelength and bandwidth, size of the illuminated patch, angle of illumination, angle of detection, illumination intensity, and distance between the focal plane and the speckle image plane. Speckle imaging techniques take advantage of the correlations between changes in speckle patterns caused by small changes in one or more of these parameters.
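The way wavelength-scale roughness produces speckle can be illustrated numerically: a rough surface is modeled as a random phase screen over an illuminated patch, and far-field propagation is modeled with a Fourier transform. All parameters below (grid size, patch radius) are illustrative assumptions.

```python
import numpy as np

# Minimal simulation of a fully developed speckle pattern (illustrative):
# a rough surface imposes a uniformly random phase on the reflected field
# within a circular illuminated patch; far-field propagation is modeled
# by a 2-D Fourier transform; the detected intensity is |field|^2.

rng = np.random.default_rng(0)
N = 256                                   # simulation grid size (pixels)
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
patch = (x**2 + y**2) < (N // 8)**2       # illuminated patch (pupil)

# Wavelength-scale roughness -> uniformly random phase in [0, 2*pi).
field = patch * np.exp(1j * rng.uniform(0.0, 2 * np.pi, (N, N)))
speckle = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

# Fully developed speckle has contrast (std/mean) near 1.
contrast = speckle.std() / speckle.mean()
print(f"speckle contrast: {contrast:.2f}")
```

The bright and dark grains in `speckle` are the interference pattern described above; shrinking the illuminated patch makes the grains larger, consistent with the dependence on illuminated-patch size listed in the text.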
A variety of laser speckle techniques have been used in biomedical applications. Some use the statistics of the speckle patterns to obtain information about surface roughness, or exploit nonlocalization properties for vision testing, for example, to measure the refraction of the eye. Speckle photography has been used to measure displacements, vibrations, and strains.
Speckle techniques such as electronic speckle-pattern interferometry (ESPI) overcame the practical limitations of earlier film-based methods by storing speckle patterns electronically and using image processing techniques to compare the images in near real time. This method has been widely used for displacement measurement and vibration analysis. It takes advantage of movement of the target “surface” and produces correlation fringes that correspond to the object’s local surface displacements between two subsequent exposures, with or without
60 L. Shirley et al., 1992, “Advanced Techniques for Target Discrimination Using Laser Speckle,” Lincoln Lab. J. 5(3), 367-440.
FIGURE 3-13 LASCA images of a hand, showing the effect of occluded blood flow: (a) hand under normal conditions and (b) same hand when a blood-pressure cuff is inflated to reduce blood flow to the hand. SOURCE: J. Briers and S. Webster, 1996, “Laser speckle contrast analysis (LASCA): a nonscanning, full-field technique for monitoring capillary blood flow,” J. Biomedical Optics 1(2): 174.
an additional reference beam.61 The addition of a local oscillator phase and frequency modulation using an optical phase-locked loop to the ESPI system (OPL-ESPI) allows for the generation of Doppler speckle contours of a vibrating surface even from unstable sensor platforms.62
Laser speckle contrast analysis (LASCA) is another method that utilizes the movement of the “target” in that it measures flow velocity. Instead of using the temporal statistics of speckle like many other techniques, this method requires only a single exposure and uses the spatial statistics of the laser speckle to map the flow velocity at a particular instant in time without requiring scanning.63 With LASCA, a time-varying speckle pattern is captured by an imaging device with a finite integration time, causing some of the speckle fluctuations to be averaged out, or blurred. LASCA takes advantage of the fact that the ratio of the standard deviation to the mean intensity of the speckle patterns can provide a measure of the contrast of the pattern. If there is a lot of movement, the blurring will increase and the standard deviation of the intensity will decrease, resulting in a lower contrast. Conversely, if there is little
61 J. Garcia et al., 2008, “Projection of speckle patterns for 3D sensing,” J. Phys., Conf. Ser. 139-12026.
62 S. Moran et al., 1987, “Optically phase-locked electronic speckle pattern interferometer,” Appl. Optics 26(3): 475.
63 J. Briers and S. Webster, 1996, “Laser speckle contrast analysis (LASCA): A nonscanning, full-field technique for monitoring capillary blood flow,” J. Biomedical Optics 1(2): 174.
movement, the blurring will decrease and the standard deviation will increase, resulting in a higher contrast. The mean intensity will remain unchanged.64 If the right wavelength is chosen, these noncontact velocity-measuring methods are able to see through skin and provide a velocity map of capillary blood flow in biomedical applications. Figure 3-13 shows an example of a blood flow map obtained by the LASCA technique.
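The LASCA statistic described above, the ratio of standard deviation to mean intensity over a local window, can be sketched in a few lines. The window size and the use of averaged exponential samples to mimic motion blur are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# LASCA sketch (illustrative): local speckle contrast K = std/mean is
# computed over small sliding windows of a time-integrated speckle image.
# Motion blurs the speckle during the exposure, lowering K; static
# regions retain K near 1.

def lasca_contrast(image, win=7):
    """Map of local contrast (std/mean) over win x win windows."""
    w = sliding_window_view(image, (win, win))
    return w.std(axis=(-2, -1)) / w.mean(axis=(-2, -1))

rng = np.random.default_rng(1)
# Single-exposure speckle intensity is exponentially distributed (K ~ 1);
# blurring is mimicked by averaging 16 independent speckle realizations,
# which drives K toward 1/sqrt(16) = 0.25.
static = rng.exponential(1.0, (64, 64))
blurred = rng.exponential(1.0, (16, 64, 64)).mean(axis=0)

k_static = lasca_contrast(static).mean()    # close to 1
k_blurred = lasca_contrast(blurred).mean()  # close to 0.25
print(f"static K ~ {k_static:.2f}, blurred K ~ {k_blurred:.2f}")
```

As the text notes, more movement means more blurring, a smaller standard deviation, and hence lower contrast, while the mean intensity is unchanged; the single-exposure, spatial-statistics character of the computation is what lets LASCA map flow without scanning.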
Several techniques that take advantage of the information contained in speckle patterns have been investigated for the purpose of imaging Earth-orbiting satellites with high resolution from ground-based imaging systems. Imaging correlography and sheared-beam imaging (SBI), also called sheared coherent interferometric photography (SCIP), can produce 2-D intensity images of targets of interest at potentially long ranges using lasers with reduced temporal coherence requirements, no local oscillator, and simple array detectors. A more detailed treatment of these techniques can be found in Voelz et al., 2002,65 and the references therein. The first of these techniques, imaging correlography,66,67 uses a single laser and takes array measurements of intensity speckle patterns over multiple frames, with the assumption that the target and the array are moving relative to each other; it has been demonstrated at ground-to-space distances on satellites.68 SCIP, or SBI, uses three laser beams, which are displaced spatially and have temporally shifted frequencies, and processes the backscattered speckle patterns to form the images. The common-path geometry of the returned light from the three lasers makes this technique well suited to situations where turbulence is present near the receive aperture. A third imaging technique, Fourier telescopy, is also described in Voelz et al., 2002.69 While this technique is similar to SCIP in that it involves projecting triplets of laser beams and requires only light-bucket detection on the ground, it uses the information from the fringe pattern formed by the separated transmitters rather than the speckle return, and so it is not a speckle imaging technique.
Note that direct speckle photography or interferometry techniques can measure the distortion of an object but not its actual 3-D shape. Several techniques take advantage of other “sensor-side” changes to obtain information about the 3-D shape of the object. These techniques may involve taking measurements from multiple aspect angles, at different distances, with different illumination region sizes, or at different wavelengths. In each case, it is important to ensure that the parameter being varied does not vary so much that the speckle images completely decorrelate, which ultimately requires a very application-specific system.
Wavelength diversity has been used to gather information about the size, shape, surface and rotation characteristics of objects based on the wavelength dependency of speckle intensity patterns. This can be done statistically (called wavelength decorrelation) to obtain the laser radar cross section of an object, which has been demonstrated with submillimeter range resolution in the laboratory. The decorrelation technique can be implemented using either direct detection, which requires a great deal of processing, or a form of coherent detection using a reference plane or point.70
64 A.B. Perimed, “Laser Speckle Contrast Analysis,” http://www.perimed-instruments.com/support/theory/laser-speckle-theory.
65 D.G. Voelz et al., 2002, “Ground-to-space laser imaging: review 2001,” Proc. SPIE 4489.
66 P.S. Idell, J.R. Fienup, and R.S. Goodman, 1987, “Image synthesis from nonimaged laser speckle patterns,” Opt. Lett. 12(11): 858.
67 J.R. Fienup and P.S. Idell, 1988, “Imaging correlography with sparse arrays of detectors,” Opt. Engr. 27: 778.
68 D.G. Voelz et al., 1994, “High-resolution imagery of a space object using an unconventional, laser illumination, imaging technique,” Proc. SPIE 2312: 202.
69 D.G. Voelz et al., 2002, “Ground-to-space laser imaging: review 2001,” Proc. SPIE 4489.
70 L. Shirley and G. Hallerman, 1996, “Applications of tunable lasers to laser radar and 3D imaging,” MIT Lincoln Lab., TR-1025.
FIGURE 3-14 3-D imaging concept based on speckle-pattern sampling. SOURCE: L. Shirley and G. Hallerman, 1996, “Nonconventional 3-D imaging using wavelength dependent speckle,” Lincoln Lab. J. 9(2): 153. Reprinted with permission of MIT Lincoln Laboratory, Lexington, Massachusetts.
Wavelength-diversity-based techniques can also be done deterministically. An example of this, called spectral pattern sampling (SPS), is based on the concept of sampling 3-D Fourier space of the scattering object. In the remainder of this section, the focus is on using speckle to obtain 3-D images or contours of a target. This application serves to highlight the key components and limitations of speckle imaging.
Figure 3-14 illustrates the SPS concept, which can be thought of as a 3-D extension of the imaging correlography technique mentioned earlier. An object is flood illuminated by a tunable, coherent laser beam, and the resulting speckle pattern is measured with a detector array at a series of equally spaced laser frequencies. The individual speckle frames are stacked to form a 3-D data array representing the squared magnitude of the 3-D Fourier transform of the image, and its 3-D Fourier transform yields the 3-D autocorrelation function of the 3-D image of the object.71 “The reflective reference point near the scattering object causes bright voxels to appear in certain regions of the 3-D array that represent the location in space of scattering cells on the surface of the object”; this is similar to digital holography, with a glint near the object acting as the local oscillator.72 The 3-D image is “formed by recording the location of these bright voxels.”73 According to Shirley, “This array carries information about the location and complex amplitude of the scattering cells located on the surface. If both the amplitude and phase of the speckle pattern were known (as is possible with digital holography), the 3-D Fourier transform of the speckle’s complex amplitude would provide the target’s 3-D image. However, because phase information is lost in the direct-detection process, the spatial autocorrelation function of the 3-D image is obtained instead.”74
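The loss of phase in direct detection, and the consequent recovery of the autocorrelation rather than the image itself, can be illustrated with a one-dimensional numerical sketch. The two-scatterer object and grid size below are hypothetical, and actual SPS processing is 3-D.

```python
import numpy as np

# 1-D illustration of why direct detection of speckle yields the
# autocorrelation rather than the image: the detected quantity is
# |F{object}|^2, whose inverse Fourier transform is the object's
# (circular) autocorrelation, not the object itself.

obj = np.zeros(64)
obj[10] = 1.0      # two scattering cells of different strength
obj[25] = 0.5

intensity = np.abs(np.fft.fft(obj))**2     # phase is lost here
autocorr = np.fft.ifft(intensity).real     # Wiener-Khinchin relation

# Compare against a directly computed circular autocorrelation.
direct = np.array([np.dot(obj, np.roll(obj, k)) for k in range(64)])
print(np.allclose(autocorr, direct))       # True
```

With a strong reference point added to `obj` (the glint acting as local oscillator), cross terms of the autocorrelation reproduce the scatterer locations, which is the mechanism behind the bright voxels described in the quoted passage.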
74 L.G. Shirley, Reconstruction of 3D Target Images from Wavelength-Dependent Speckle Intensity. Available at http://www.physics.uci.edu/~isis/Yountville/Shirley.pdf. Accessed on March 14, 2014.
FIGURE 3-15 Comparison between contact measurement and SPS measurement. (a) stamped sheet-metal test object (100 × 100 × 20 mm), (b) coordinate-measuring-machine (CMM) contact measurement of surface contour of test object, and (c) SPS measurement of surface contour of test object. SOURCE: L. Shirley and G. Hallerman, “Nonconventional 3D Imaging Using Wavelength Dependent Speckle,” Lincoln Lab. J. 9(2), 153-186 (1996). Reprinted with permission of MIT Lincoln Laboratory, Lexington, Massachusetts.
This type of frequency-based imaging technique requires a setup that is designed for the application and target class of interest. The angular resolution depends on both the scan length and the detector array size: the larger the volume of Fourier space sampled, the better the achievable angular resolution.77 The x and y resolutions are limited by the detector array size (Wx, Wy) and the distance between the image plane and the detector plane (Zd) and are given by

Δx = λZd/Wx and Δy = λZd/Wy.
The raw range resolution is inversely proportional to the frequency tuning range (B) used and is given by

Δz = c/(2B),

where c is the speed of light.
For example, for a 10 mm × 10 mm detector array at a distance of 1 m, with a wavelength of λ = 0.8 µm and a frequency scan of 5 THz, one can achieve resolutions of Δx = Δy = 80 µm and a range resolution of Δz = 30 µm. This method compares well with contact measurements, as shown in Figure 3-15, but does not require contact with the test object, and the data can be collected in a fraction of the time (30 min versus 10 hr for the test in Figure 3-15).
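The worked example above follows directly from the two resolution expressions; a quick check, using the values quoted in the text:

```python
# Check of the SPS resolution expressions with the values from the text:
# dx = lambda * Zd / Wx (cross-range) and dz = c / (2B) (range).

c = 299_792_458.0     # speed of light, m/s
lam = 0.8e-6          # wavelength, m
Zd = 1.0              # image-plane to detector-plane distance, m
Wx = 10e-3            # detector array width, m
B = 5e12              # frequency tuning range, Hz (5 THz)

dx = lam * Zd / Wx    # cross-range resolution, 80 um
dz = c / (2 * B)      # range resolution, ~30 um

print(f"dx = {dx * 1e6:.0f} um, dz = {dz * 1e6:.0f} um")
```

The same dz = c/(2B) scaling shows why the frequency span must be matched to the depth of the features of interest, as discussed below.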
As mentioned above, the speckle imaging system must be specifically designed and tuned for the application of interest. The frequency span must be chosen on the order of the depth of the features of interest. The frequency steps for the 3-D imaging method described above must also be chosen with the range extent of the target in mind in order to avoid aliasing. The larger the range extent of the target, the smaller the frequency step must be.
Laser requirements are an important consideration for all speckle imaging techniques. This technology requires coherent illumination with narrow linewidths in order to record the speckle patterns and avoid decorrelation. The narrow linewidth and tunability required may be considered constraints of the technology, as the current cost of these lasers is significant. However, the coherence constraints of the laser for speckle imaging are much less stringent than for optical coherent detection. In this case, the
75 J. Fienup et al., 1999, “3-D imaging correlography and coherent image reconstruction,” Proc. SPIE 3815, Digital Image Recovery and Synthesis IV, 60, September 30.
76 L. Shirley and G. Hallerman, 1996, “Nonconventional 3D imaging using wavelength dependent speckle,” Lincoln Lab. J. 9(2): 153.
coherence length must be only twice as long as the range extent of the target, and coherence between the laser frequency steps is not required.
Aberrations also affect the performance of speckle imaging: wavelength aberrations, detector-plane distortions, and depth-of-field aberrations can all degrade the image return. These aberrations can be compensated digitally or by altering the sensing configuration; in some cases, however, custom optics may be required. Because the speckle return can be small, sensitive cameras with low noise and high frame rates are desired.
The speckle technique above requires a reference point in order to form the image. Other methods that allow imaging without the reference have been described,78 but they are limited to the depth of field of conventional imaging. Additionally, significant research has been carried out on obtaining 3-D images from the 3-D autocorrelation functions, which does not require a reference point.79,80,81 Lensless coherent imaging is another alternative, in which a support constraint from a low-resolution image (instead of a triple intersection of the autocorrelation support), in combination with intensity measurements over a large aperture, can be used to reconstruct fine-resolution images.82,83
Any technique involving speckle imaging requires narrow linewidth, coherent lasers. For applications taking advantage of multiple wavelengths, tunable lasers are also highly desirable. Since the 3-D imaging technique described above requires stepping through multiple wavelengths during the scan, rapidly tunable lasers or multiwavelength lasers will reduce the scan time necessary.
Speckle imaging techniques such as LASCA and ESPI are widely described in the open literature and new developments in these areas would also likely be widely published. Speckle imaging techniques like the one detailed above are still relatively immature and further developments in this research area would also be published. Published literature on the development of laser systems described above would be an indicator. In addition, research on imagery from autocorrelation functions that can enable improved performance may be a sign of work to further develop this area.
This technology allows noninvasive, very-high-resolution imaging. Movements can be visualized on a very small scale (blood flow, etc.) without the need to scan. Speckle imaging can also be used to visualize shear, stress, and strain measurement; vibration mode analysis; and nondestructive testing. It can measure complex and irregular shapes having discontinuities and steep sloped surfaces. Formation of 3-D images is possible with simple receivers. Speckle imaging does not require LOs, although they may be used in some applications. Furthermore, although a coherent laser source is required, the required coherence length depends only on the range extent of the target rather than the full round-trip travel time, and coherence between frequency steps is not required.
The 3-D technique detailed above has not been demonstrated at long ranges, although the 2-D imaging correlography technique mentioned earlier has. Turbulence may limit the correlation between measurements; however, this technique is less sensitive to atmospheric turbulence than direct imaging. As mentioned previously, these measurements typically require a very controlled setting for optimal performance, especially at high resolution. Another drawback is the high cost of fast-scanning tunable lasers if frequency variation is the phenomenon being exploited. Furthermore, the processing requirements for the sensing mode described above are very stringent, although real-time processing will become feasible as processor technology continues to evolve at a rapid pace. It should be noted that 2-D
78 L. Shirley and G. Hallerman, 1996, “Nonconventional 3D imaging using wavelength dependent speckle,” Lincoln Lab. J. 9(2): 153.
79 J.R. Fienup et al., 1982, “Reconstruction of the support of an object from the support of its autocorrelation,” J. Opt. Soc. Am. 72(5): 610.
80 Paxman et al., 1994, “Use of an opacity constraint in three-dimensional imaging,” Proc. SPIE 2241: 116.
81 J.M. Fini, 1997, “Three dimensional image reconstruction from Fourier magnitude measurements,” BS/MS Thesis, Dept. EECS, MIT, Cambridge, Mass.
82 J.R. Fienup and A. M. Kowalczyk, 1990 “Phase retrieval for a complex-valued object by using a low resolution image,” J. Opt. Soc. Am. A 7(3): 450.
83 H.N. Chapman et al., 2006, “High-resolution ab initio three-dimensional x-ray diffraction microscopy,” J. Opt. Soc. Am. A 23: 1179.
Speckle technology is currently used in biomedical applications and for nondestructive testing for stress/strain, vibration mode analysis, etc. There are a number of potential applications of wavelength-dependent speckle in advanced manufacturing, machine vision, industrial inspection, robotics, and dimensional metrology. These methods can also be used to obtain laser radar cross sections, surface slope maps, or even 2-D and 3-D images of targets.
In theory, because the range resolution does not degrade with range, it could be attractive for applications involving long-range imaging.
Conclusion 3-7: Speckle imaging can be used to measure 3-D shape, surface roughness, displacements, vibrations, and strains.
As discussed in more detail in Chapter 4, broad-gain-bandwidth lasers such as those based on the solid state material Ti:sapphire can generate ultrashort pulses of femtosecond duration through the process of mode-locking. Amplification of the pulses to high energies is possible through the technique of chirped-pulse amplification (CPA), and current technology makes it possible to generate pulse energies in the joule range with durations on the order of 20 fs. The first ultrashort-pulse lasers were based on liquid-dye gain media, required considerable effort to maintain operation, and were totally unsuited for use outside the laboratory. With the development of Ti:sapphire and other solid-state ultrafast lasers, it became possible to build systems that could be used in ladar and lidar systems and deployed in mobile laboratories for field measurements. Further developments now under way in fiber-format solid-state lasers promise even more possibilities for field-deployable ultrafast lasers.
The simplest application for an ultrafast source would be in single-pulse, time-of-flight ranging ladar systems, in one to three dimensions. Since a 20-fs pulse in air has a spatial extent of 6 µm, it could provide range resolution, with appropriate processing, at the multimicron level. Unfortunately, sensitive detectors with the necessary response time do not yet exist. Even if appropriate detectors existed, there are two major limitations to long-range, time-of-flight sensing with ultrafast sources. First, the spectral dispersion of the atmosphere, normally not a concern for nanosecond-duration pulses, cannot be ignored for femtosecond pulses and leads to significant pulse stretching for light at short wavelengths. This can be countered, in principle, by adding a frequency chirp to the transmitted pulse that is undone by the atmosphere over the round-trip path, but that presumes a good knowledge of the distance to the target and a full understanding of the exact atmospheric conditions along the path. The second limitation is in the pulse energy, since nonlinear atmospheric effects, such as the nonlinear refractive index of the atmosphere, stimulated Raman scattering, and self-focusing, can distort the pulse in a nontrivial way. These effects are discussed in more detail in the next section, where nonlinear effects are a benefit rather than a limitation.
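The time-of-flight figures above follow from a few lines of arithmetic, sketched here as a quick check (values derived from the text, not from any specific experiment):

```python
# Quick check of the time-of-flight numbers quoted above: a pulse of
# duration tau occupies a length c*tau in air, and a round-trip
# measurement resolves range to roughly c*tau/2.
C = 299_792_458.0  # speed of light, m/s

def pulse_length_m(tau_s: float) -> float:
    """Spatial extent of a pulse of duration tau_s in air (n ~ 1)."""
    return C * tau_s

def range_resolution_m(tau_s: float) -> float:
    """Round-trip range resolution for a pulse of duration tau_s."""
    return C * tau_s / 2.0

tau = 20e-15  # 20 fs
print(f"pulse length: {pulse_length_m(tau)*1e6:.1f} um")          # ~6 um
print(f"range resolution: {range_resolution_m(tau)*1e6:.1f} um")  # ~3 um
```

A 20-fs pulse thus corresponds to a ~6-µm spatial extent and a ~3-µm round-trip resolution, consistent with the "multimicron level" cited above.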
Femtosecond lasers can be used for precision range measurements through techniques less subject to the limitations of simple, single-pulse, time-of-flight measurement. Here the frequency-spectrum characteristics of the mode-locked laser are employed. One approach uses the femtosecond source with a conventional swept-frequency (FM) ladar system as a means to accurately determine the frequency of the ladar source as a function of time, and thus provide higher accuracy in the ladar measurement. The femtosecond laser spectrum consists of a “comb” of frequencies centered about the
laser peak wavelength. The frequency spacing can be locked to an RF standard with 10⁻⁸ accuracy. By heterodyning a portion of the FM ladar transmitter output against the comb of the femtosecond source, one obtains a precise sample of the sweep at frequency intervals set by the comb spacing, a better and more accurate measurement than, say, using an optical interferometer as the sampling means. Operation of an FM lidar around 1,560 nm, based on an external-cavity diode laser with 1-msec-duration, 1-THz-span sweeps, is claimed to result in a bandwidth-limited 130-µm range resolution and 100-nm accuracy, with 1-msec update rates.86
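The scale of these numbers can be checked with the standard FM-ladar relations; this is a sketch in which the comb spacing is an assumed typical value (100 MHz, not stated for this system), and the quoted 130-µm figure reflects processing details while c/(2B) gives the bandwidth-limited scale:

```python
# Sketch of the comb-calibrated FM ladar numbers quoted above.
C = 299_792_458.0  # speed of light, m/s

def fm_range_resolution_m(sweep_span_hz: float) -> float:
    """Bandwidth-limited range resolution of a swept-frequency (FM) ladar."""
    return C / (2.0 * sweep_span_hz)

def comb_samples_per_sweep(sweep_span_hz: float, comb_spacing_hz: float) -> int:
    """Number of comb calibration points crossed during one frequency sweep."""
    return int(sweep_span_hz // comb_spacing_hz)

span = 1e12    # 1-THz sweep, as in the system described above
f_rep = 100e6  # assumed comb spacing (illustrative)
print(f"resolution ~ {fm_range_resolution_m(span)*1e6:.0f} um")
print(f"calibration points/sweep: {comb_samples_per_sweep(span, f_rep)}")
```

With these assumed values, each 1-ms sweep crosses some 10,000 comb lines, which is what makes the comb such a dense and accurate frequency ruler for the sweep.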
Clearly, use of a continuous-wave (CW) comb as the source for a ladar, while lowering the peak power and greatly increasing the acquisition rate, would suffer from the range ambiguity set by the pulse rate: 1.5 m for a typical 100-MHz-rate comb. One approach to precision ranging with a longer ambiguity region employs one femtosecond laser as the transmitter and a second femtosecond laser as the LO in a heterodyne-detection ladar.87
One comb serves as the “signal” source and samples a distance path defined by reflections off a target and a reference plane. The second comb serves as a broadband local oscillator (LO) and recovers range information in an approach equivalent to linear optical sampling (that is, a heterodyne cross-correlation between the signal and LO). The heterodyne detection provides shot-noise-limited performance, so that even weak return signals can be detected and the information in the carrier phase is retained.
The pulse rate of the LO is slightly offset from that of the transmitter. Figure 3-16 shows the time-domain picture for nominal 100-MHz-rate pulses with a 5.2-kHz difference in the rates. Provided that the two combs are coherent (locked to the same optical standard, with the RF rates held to very high precision), the system can, through just the time-of-flight measurement, provide a range precision of 3 µm over the 1.5-m ambiguity range with a 200-µs measurement time and, with determination of the optical phase of the return, a 5-nm precision over a 60-ms averaging time.
Most significantly, if the transmit and LO combs are interchanged, it is possible, by calculating the difference in the ranges measured, to determine the integer number of pulse intervals between the transmitter and target and thereby increase the range ambiguity to the pulse propagation velocity divided by twice the 5-kHz difference frequency, extending the ambiguity range, for the case shown, to 30 km.
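The dual-comb parameters above hang together numerically; the following sketch uses the values quoted in the text (the exact experimental parameters may differ):

```python
# Numerical sketch of the dual-comb ranging parameters quoted above
# (100-MHz pulse rates, 5.2-kHz offset).
C = 299_792_458.0          # speed of light, m/s
f_rep = 100e6              # transmitter pulse repetition rate, Hz
delta_f = 5.2e3            # LO repetition-rate offset, Hz

# Each period, the LO pulse slips relative to the signal pulse:
T_r = 1.0 / f_rep                                  # 10-ns period
delta_T = 1.0 / f_rep - 1.0 / (f_rep + delta_f)    # ~0.52-ps slip per period

# One full scan of the LO across the signal takes 1/delta_f of real time,
# matching the ~200-us measurement time quoted in the text.
scan_time = 1.0 / delta_f                          # ~192 us

# Effective-time magnification of the sampled waveform:
magnification = f_rep / delta_f                    # ~19,000x

# Range ambiguity of a single comb, and the extended (Vernier) ambiguity
# obtained by interchanging the transmit and LO combs.
amb_single = C / (2.0 * f_rep)                     # ~1.5 m
amb_vernier = C / (2.0 * 5e3)                      # ~30 km at the 5-kHz level

print(f"slip/period: {delta_T*1e12:.2f} ps, scan: {scan_time*1e6:.0f} us")
print(f"ambiguity: {amb_single:.2f} m -> {amb_vernier/1e3:.0f} km")
```

The Vernier step is the key design choice: a 5-kHz rate difference, tiny compared with 100 MHz, stretches the 1.5-m single-comb ambiguity by four orders of magnitude.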
In a real atmosphere, rather than the vacuum of space, the precisions predicted would not be possible because of atmospheric fluctuations, which limit measurement precision to 1 part in 10⁷.88 Simpler dual-comb ladar systems, based on free-running mode-locked Er:fiber lasers, can achieve a ranging precision of 2 µm in a 140-µs acquisition time, improving to 0.2 nm with a 20-ms time,89 with a 1-m range ambiguity. The ambiguity can be eliminated through, say, an adjunct nanosecond-pulse time-of-flight ranging system run over the same sensing path. Another approach to a simplified dual-comb system encodes the transmitted pulse with a pseudo-random binary modulation; in one demonstration employing a pair of 100-MHz combs with variable timing between them, the coding allowed expansion of the ambiguity range to 190.5 m.90
86 E. Baumann, F.R. Giorgetta, I. Coddington, L.C. Sinclair, K. Knabe, W.C. Swann, and N.R. Newbury, 2013, “Comb-calibrated frequency-modulated continuous-wave ladar for absolute distance measurements,” Opt. Lett. 38: 2026.
87 I. Coddington, W.C. Swann, L. Nenadovic, and N.R. Newbury, 2009, “Rapid and precise absolute distance measurements at long range,” Nature Photonics 3: 351.
88 N. Bobroff, 1993, “Recent advances in displacement measuring interferometry,” Meas. Sci. Technol. 4: 907.
89 T.-A. Liu, N.R. Newbury, and I. Coddington, 2011, “Sub-micron absolute distance measurements in sub-millisecond times with dual free-running femtosecond Er fiber-lasers,” Opt. Exp. 19: 18501.
90 M. Godbout, J.-D. Deschenes, and J. Genest, 2010, “Spectrally resolved laser ranging with frequency combs,” Opt. Exp. 18: 15981.
FIGURE 3-16 “Dual-comb ranging concept. (a) A high-repetition-rate “signal” source transmits pulses that are reflected from two partially reflecting planes (glass plates): the reference (r) and the target (t). The reference is a flat plate and yields two reflections, the first of which is ignored. Distance is measured as the time delay between the reflections from the back surface of the reference flat and the front of the target. (b) The signal pulses are detected through linear optical sampling against an LO. The LO generates pulses at a slightly different repetition rate. Every repetition period (Tr), the LO pulse “slips” by ΔTr relative to the signal pulses and samples a slightly different portion of the signal. Consecutive samples of the overlap between the signal and LO yield a high-range-resolution measurement of the returning target and reference pulses. Actual data are shown on the right side, where each discrete point corresponds to a single digitized sample and only the immediate overlap region is shown. (c) The measured voltage out of the detector in both real time (lower scale) and effective time (upper scale) for a target and reference plane separated by 0.76 m. A full “scan” of the LO pulse across the signal pulse is accomplished every ~200 µs in real time and every ~10 ns in effective time. Two such scans are shown to illustrate the fast, repetitive nature of the measurement. Also seen are two peaks in grey which are spurious reflections of the launch optics.”
SOURCE: Reprinted with permission from Macmillan Publishers Ltd: Nature Photonics. I. Coddington, W. Swann, L. Nenadovic, and N. Newbury, 2009, “Rapid and precise absolute distance measurements at long range,” Nature Photonics 3(6).
Conclusion 3-8: While the short pulses that can be generated with femtosecond lasers could allow the measurement of range to be improved by 5 orders of magnitude compared to conventional nanosecond-based sensors, dispersion and fluctuations in the atmosphere prevent such a dramatic result, and nonlinearities in the atmosphere limit the pulse energy.
Conclusion 3-9: In space, use of the precision frequency combs generated by stabilized femtosecond lasers would allow nanometer-precision range measurements with 30-km-level range ambiguities.
Conclusion 3-10: Femtosecond-laser-based ranging systems would enable deployment of satellite arrays with optical-wavelength-level measurement and control of spacings, enabling distributed-aperture sensors.
Recommendation 3-1: Further development of femtosecond-laser ranging systems is of importance to the development of space-based sensors with significantly improved performance and should be encouraged in the United States and monitored in other countries.
As is discussed in more detail below, if the single-pulse peak power from a femtosecond laser exceeds a certain level in the atmosphere, self-focusing due to the nonlinearity of the molecules in the atmosphere leads to the formation of a “filament” of light, in which the beam size remains constant over a much longer distance than diffraction would allow. Laser-induced breakdown spectroscopy (LIBS), discussed in Chapter 2, can be extended in range to several hundred meters through the use of filaments, and work is ongoing to determine the efficacy of the technique compared to conventional, nanosecond-duration-excited LIBS.91,92
In Chapter 4 the fundamentals of the development of supercontinuum (SC) sources based on optical fibers are discussed. These sources provide very broad, nearly structureless optical spectra similar to those of incandescent (blackbody-like) sources, but many orders of magnitude brighter. For path-averaged differential absorption lidar (DIAL) systems, SC sources provide more data than limited-tuning-range lasers and, especially in complex atmospheric environments, can help to identify and quantify different species. At present, sources based on silica fibers are readily available, generating wavelengths extending to 2,450 nm, and there is active development of sources based on other types of glass fibers to generate wavelengths extending into the long-wave IR region.
Figures 3-17 and 3-18 show the experimental setup and atmospheric data generated, respectively, for one SC-based, path-averaged system, where the source is based on a Q-switched microchip laser and commercial photonic-crystal fibers and detection is done through use of a commercial optical spectrum analyzer (OSA) to determine the path transmission as a function of wavelength. Through the maximum likelihood estimation (MLE) method, the calculated concentration levels of species are found to converge rapidly to a narrow error level with an increase in the number of matched individual absorption lines from
91 Ph. Rohwetter, K. Stelmaszczyk, L. Woste, R. Ackermann, G. Mejean, E. Salmon, J. Kasparian, J. Yu, and J.- P. Wolf, 2005, “Filament-induced remote surface ablation for long range laser-induced breakdown spectroscopy operation,” Spectrochimica Acta Part B 60: 1025.
92 C.G. Brown, R. Bernath, M. Fisher, M.C. Richardson, M. Sigman, R.A. Walters, A. Miziolek, H. Bereket, and L.E. Johnson, 2006, “Remote femtosecond laser induced breakdown spectroscopy (LIBS) in a standoff detection regime,” in Enabling Technologies and Design of Nonlethal Weapons, G.T. Shwaery, J.G. Blitch, and C. Land, ed., Proc. of SPIE 6219: 62190B.
FIGURE 3-17 Experimental setup for path-averaged DIAL based on fiber SC.
SOURCE: D.M. Brown, K. Shi, Z. Liu, and C.R. Philbrick, 2008, “Long-path supercontinuum absorption spectroscopy for measurement of atmospheric constituents,” Optics Express 16(12): 8457.
FIGURE 3-18 Data generated by the system shown in Figure 3-17 over a 300-m outdoor path, compared with a standard model of atmospheric transmission for water vapor.
SOURCE: D.M. Brown, K. Shi, Z. Liu, and C.R. Philbrick, 2008, “Long-path supercontinuum absorption spectroscopy for measurement of atmospheric constituents,” Optics Express 16(12): 8457.
10 to 500, showing the advantage of SC-based DIAL over conventional DIAL.93 Further measurements have shown the ability, over a 600-m path, with wavelengths in the 760-nm region, to calculate the oxygen concentration to an accuracy of 2 × 10⁻⁴.94
Another approach to measurements with SC sources has employed a Fourier-transform spectrometer to analyze the return signals with a processing time of several seconds.95
Even faster data processing would be possible through the combination of a grating or other dispersive element along with a multielement detector, although the ultimate spectral resolution would still be less than possible with Fourier-transform techniques.
Use of SC sources for spectroscopy-based ranging systems (hyperspectral lidar) has been demonstrated for the discrimination of ground reflections from trees and inorganic material, while also generating range information, to date at short (10-m) ranges but with the expectation of eventual deployment to longer-range, airborne platforms.96 Simulations on the use of SC sources for aerosol backscatter illustrate the possible advantages of being better able to characterize the aerosols by determining the wavelength variation of the scattered signal.97
More complex and higher-performance DIAL systems are based on the frequency combs generated by mode-locked lasers, with techniques similar to that used for the precision ranging described in the preceding section. Consider two frequency combs that are slightly offset in their repetition rate and mixed together on a detector. The resultant signal also contains a comb of frequencies, with each frequency representing a specific optical comb frequency. The optical combs have been essentially converted into combs that can be readily processed by high-speed electronics, and a Fourier-transform of the detected signal can extract the intensity information for each optical frequency. If an absorbing gas is placed in one (or both) of the beam paths, the Fourier transform will exhibit the absorption features of the gas in the wavelength region covered by the comb.
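The comb-to-RF mapping described above can be made concrete with a short sketch; the parameter values here are illustrative, not taken from a specific experiment:

```python
# Sketch of the dual-comb (multiheterodyne) frequency mapping: each optical
# comb line beats against its nearest LO-comb line to give an RF comb whose
# spacing is the repetition-rate offset.
f_rep = 100e6      # comb repetition rate (line spacing), Hz (assumed)
delta_f = 1e3      # repetition-rate offset between the two combs, Hz (assumed)

def rf_beat(n: int, offset_hz: float) -> float:
    """RF frequency of the beat between the n-th pair of comb lines."""
    return n * offset_hz

# Compression factor: an optical span of N*f_rep maps to an RF span of N*delta_f.
compression = f_rep / delta_f              # 1e5 here

# Example: a 1-THz optical span (10,000 comb lines) collapses to 10 MHz of RF,
# easily digitized and Fourier transformed by ordinary electronics.
n_lines = int(1e12 // f_rep)               # 10,000 lines
rf_span = rf_beat(n_lines, delta_f)        # 10 MHz

# To keep the RF comb unaliased, the mapped span must stay below f_rep/2.
assert rf_span < f_rep / 2.0
print(f"compression: {compression:.0f}x, RF span: {rf_span/1e6:.0f} MHz")
```

The aliasing constraint at the end is what ties the choice of offset to the usable optical bandwidth: a smaller offset gives more compression but fewer lines that can be mapped without overlap.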
In one effort, the outputs of two 10-20 fs, 800-nm Ti:sapphire lasers were focused into GaSe crystals optimized to generate difference-frequency output in the 8-12 µm “fingerprint” region of the IR.98 Figure 3-19a shows the detected signal with and without NH3 in the beam path, with Figure 3-19b showing the resultant Fourier-transformed spectra, at 2 cm⁻¹ resolution, along with comparison data from a conventional, interferometer-based Fourier-transform spectrometer. The notable result with the comb-based source is that the spectral data resulted from a 70-µs signal, compared to the 60 s required for the conventional spectrometer, thus pointing the way to rapid data acquisition, as might be desired for a system that scans large regions for the presence of gases. Another system employed two fully stabilized combs locked to a common frequency standard, with a 1-kHz difference in pulse rate, and was able to fully resolve 155,000 different comb lines in the 1,550-nm region to perform high-resolution spectroscopy of HCN gas.99
93 D.M. Brown, K. Shi, Z. Liu, and C.R. Philbrick, 2008, “Long-path supercontinuum absorption spectroscopy for measurement of atmospheric constituents,” Opt. Express 16: 8457.
94 P.S. Edwards, D.M. Brown, A.M. Wyant, Z. Liu, and C.R. Philbrick, 2009, “Atmospheric absorption spectroscopy using supercontinuum lasers,” in Conference on Lasers and Electro-Optics /International Quantum Electronics Conference, OSA Technical Digest (CD), Optical Society of America, CFJ3.
95 J. Mandon, E. Sorokin, I.T. Sorokina, G. Guelachvili, and N. Picqué, 2008, “Supercontinua for high-resolution absorption multiplex infrared spectroscopy,” Opt. Lett. 33: 285.
96 Y. Chen, E. Räikkönen, S. Kaasalainen, J. Suomalainen, T. Hakala, J. Hyyppä, and R. Chen, 2010, “Two-channel hyperspectral LiDAR with a supercontinuum laser source,” Sensors, 10: 7057.
97 S. Kaasalainen, T. Lindroos, and J. Hyyppä, 2007, “Toward hyperspectral lidar: measurement of spectral backscatter intensity with a supercontinuum laser source,” IEEE Geoscience And Remote Sensing Letters 4: 211.
98 A. Schliesser, M. Brehm and F. Keilmann, 2005, “Frequency-comb infrared spectrometer for rapid, remote chemical sensing,” Opt. Express 13: 9029.
99 I. Coddington, W.C. Swann, and N.R. Newbury, 2008, “Coherent multiheterodyne spectroscopy using stabilized optical frequency combs,” Phys. Rev. Lett. 100: 013902.
FIGURE 3-19 “(A) Single interferograms from dual-comb system with 70-µs acquisition time window each (16 µs displayed, t = 0 arbitrarily chosen at window center), without (background) and with a gas absorption cell (NH3). (B) IR spectra calculated from (A) by Fourier transformation, vs. both RF and IR frequency scales; background spectrum with maximum normalized to 1 (dotted), NH3 cell transmittance (full black). For comparison the red trace gives the transmittance of conventional Fourier transform infrared (FTIR) obtained at 2 cm⁻¹ resolution and 60 s acquisition time, with 32 spectra averaged.” SOURCE: A. Schliesser, M. Brehm and F. Keilmann, 2005, “Frequency-comb infrared spectrometer for rapid, remote chemical sensing,” Opt. Express 13: 9029.
An alternative to dual-comb systems is the National Institute of Standards and Technology (NIST)-developed, high-resolution, crossed-spectral disperser that projects the various frequency-comb modes from a single laser onto a two-dimensional digital camera. The NIST design employs a side-entrance etalon called a virtually imaged phased array (VIPA) disperser that provides ~1 GHz resolution in the visible spectral range (and a resolution of ~500 MHz at 1,550 nm).100 When combined with a lower-dispersion grating in the orthogonal spatial direction, 5-10 THz of bandwidth can be captured in a single measurement taking a few milliseconds.101 The technique has shown the ability to detect absorption levels in the 1 × 10⁻⁹ cm⁻¹ range in the 1,500-nm region, with long-path cells or cavity-enhanced systems. Shifting to mid-IR combs
100 S.A. Diddams, 2010, “The evolving optical frequency comb [Invited],” J. Opt. Soc. Am. B 27: B51.
101 S.A. Diddams, 2010, “The evolving optical frequency comb [Invited],” J. Opt. Soc. Am. B 27: B51.
would allow an increase of one to three orders of magnitude in sensitivity, providing, theoretically, parts-per-trillion sensitivities for certain gases.102
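The channel count of the VIPA-plus-grating approach follows directly from the resolution and bandwidth figures quoted above; in this sketch the comb line spacing is an assumed typical value, not stated for this instrument:

```python
# Rough channel count for the crossed VIPA-plus-grating spectrometer
# described above, using the figures quoted in the text.
resolution = 1e9          # ~1-GHz VIPA resolution (visible), Hz
bandwidth = 5e12          # 5 THz captured per measurement (lower end quoted)
f_rep = 100e6             # assumed comb line spacing, Hz (illustrative)

channels = bandwidth / resolution        # ~5,000 resolvable spectral channels
modes_per_channel = resolution / f_rep   # ~10 comb modes per resolution element

print(f"{channels:.0f} channels, {modes_per_channel:.0f} comb modes each")
```

With these assumed values, each camera pixel column integrates roughly ten comb modes rather than resolving individual lines, which is why the approach trades some resolution for single-shot bandwidth.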
One of the most dramatic applications of femtosecond lasers has been in the use of atmospheric filaments generated by the launching of high-power femtosecond lasers into the atmosphere. Lidar applications include aerosol measurements, DIAL and laser-induced fluorescence (LIF), discussed below.
A key concept behind filament formation is the self-focusing effect103 for laser beams, in which propagation in a medium with a positive nonlinear refractive index leads to a collapse of the beam diameter through the intensity-induced positive lens in the medium. Analysis of the effect shows that the threshold for it to occur is a function of the peak power in the beam, not the intensity. In the following discussion, all of the results apply to the use of 800-nm Ti:sapphire lasers, which have predominated in filament studies.
In atmospheric-pressure air, the threshold power for self-focusing has been measured to be about 10 GW for a 42-fs laser pulse, gradually decreasing to about 5 GW for a chirped pulse with duration longer than 200 fs.104 The change is due to the frequency-dependent nature of the atmospheric nonlinear refraction. Self-focusing alone would not result in filaments but would instead bring about optical breakdown, as is commonly seen inside solids as damage spots.
In air, filaments appear as a result of the dynamic balance between self-focusing and defocusing by the plasma produced from the air molecules.105,106,107 In contrast to self-focusing, which is set by the peak power of the beam, the plasma defocusing does depend on the intensity of the beam, and if the intensity is too low, the filament will not form. On the other hand, the equilibrium places an upper limit on the laser intensity inside the filament core of about 5 × 10¹³ W/cm², so-called “intensity clamping.”108 The single-filament beam diameter is about 100 µm, and filaments can typically extend tens to hundreds of meters, orders of magnitude longer than the Rayleigh length for a conventional beam of that diameter. Focusing to higher intensities results in the formation of multiple filaments in one beam. The peak intensity inside a single filament is high enough to dissociate or ionize other gas molecules, generate higher harmonics and THz radiation,109 induce other parametric processes, explode dust particles and aerosols, or induce partial breakdown on solid targets,110 as noted in the prior section on femtosecond LIBS.
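Two of the quantities above can be estimated to order of magnitude; in this sketch the nonlinear index of air is an assumed literature-level value, and its pulse-duration dependence is one reason the measured threshold (5-10 GW) differs from the simple Gaussian-beam estimate:

```python
import math

# Order-of-magnitude estimates for (1) the critical power for self-focusing
# and (2) the Rayleigh length of a conventional 100-um-diameter beam, for
# comparison with the tens-to-hundreds-of-meters filament lengths.
lam = 800e-9        # Ti:sapphire wavelength, m
n0 = 1.0            # linear refractive index of air (~1)
n2_air = 3e-23      # nonlinear index of air, m^2/W (assumed value)

# Critical power for self-focusing of a Gaussian beam
P_cr = 3.77 * lam**2 / (8.0 * math.pi * n0 * n2_air)   # a few GW

# Rayleigh length of a conventional beam with w0 = 50 um (100-um diameter)
w0 = 50e-6
z_R = math.pi * w0**2 / lam                            # ~1 cm

print(f"P_cr ~ {P_cr/1e9:.1f} GW, z_R ~ {z_R*100:.1f} cm")
```

A centimeter-scale Rayleigh length against filament lengths of tens to hundreds of meters is the three-to-four-orders-of-magnitude disparity the text refers to.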
102 M.J. Thorpe, D. Balslev-Clausen, M.S. Kirchner, and J. Ye, 2008, “Cavity-enhanced optical frequency comb spectroscopy: Application to human breath analysis,” Opt. Express 16: 2387.
103 R.Y. Chiao, E. Garmire, and C.H. Townes, 1964, “Self-trapping of optical beams,” Phys. Rev. Lett. 13: 479.
104 W. Liu, and S.L. Chin, 2005, “Direct measurement of the critical power of femtosecond Ti:sapphire laser pulse in air,” Opt. Express 13: 5750.
105 A. Couairon and A. Mysyrowicz, 2007, “Femtosecond filamentation in transparent media,” Phys. Rep. 441: 47.
106 L. Berge, S. Skupin, R. Nuter, J. Kasparian, and J.-P. Wolf, 2007, “Ultrashort filaments of light in weakly ionized, optically transparent media,” Rep. Prog. Phys. 70: 1633.
107 V.P. Kandidov, S.A. Shlenov, and O.G. Kosareva, 2009, “Filamentation of high-power femtosecond laser radiation,” Quant. Electron. 39: 205.
108 J. Kasparian, R. Sauerbrey, and S.L. Chin, 2000 “The critical laser intensity of self-guided light filaments in air,” Appl. Phys. B71: 877.
109 C. D’Amico, A. Houard, M. Franco, B. Prade, A. Mysyrowicz, A. Couairon, and V.T. Tikhonchuk, 2007, “Conical forward THz emission from femtosecond-laser-beam filamentation in air,” Phys. Rev. Lett. 98: 235002.
110 K. Stelmaszczyk, P. Rohwetter, G. Méjean, J. Yu, E. Salmon, J. Kasparian, R. Ackermann, J.-P. Wolf, and L. Wöste, 2004, “Long-distance remote laser-induced breakdown spectroscopy using filamentation in air,” Appl. Phys. Lett. 85: 3977.
111 G. Méchain, G. Méjean, R. Ackermann, P. Rohwetter, Y.-B. André, J. Kasparian, B. Prade, K. Stelmaszczyk, J. Yu, E. Salmon, W. Winn, L.A. Schlie, A. Mysyrowicz, R. Sauerbrey, L. Wöste, and J.-P. Wolf, 2005, “Propagation of fs TW laser filaments in adverse atmospheric conditions,” Appl. Phys. B80: 785.
112 R. Salame, N. Lascoux, E. Salmon, R. Ackermann, J. Kasparian, and J.-P. Wolf, 2007, “Propagation of laser filaments through an extended turbulent region” Appl. Phys. Lett. 91: 171106.
The background energy (energy reservoir) surrounding filaments can play an important role in their formation.113 When particles in the propagation path, like water droplets, snow, or dust, block the filament, the energy in the reservoir will refill the filament core (replenishment).114 These filamentation properties explain why filaments can form and propagate under adverse atmospheric conditions, such as rain, that would disrupt a linearly propagating beam. The background can contain up to 90 percent of the pulse energy, which is beneficial for maintaining the filament formation.115 Calculations of the spatial evolution of filaments are complicated by the high level of nonlinearity and pose a major challenge to numerical modeling.
Filaments in the atmosphere, in common with high-intensity propagation of light in fibers, will generate SC emission from the UV to the IR. The generation of the SC is assumed to be primarily the result of spectral broadening of the laser energy by self-phase modulation. Emission in the UV is enhanced via third-harmonic generation in the atmosphere, which mixes with the SC generated by self-phase modulation of the fundamental.116 Figure 3-20 shows the laboratory-measured spectra of the SC light for different levels of frequency chirp in the pulse as well as different pulsewidths.117 The pulse chirp can be controlled to correct for atmospheric dispersion over a given path so that the pulsewidth is minimized (and the peak power maximized) at the desired location in the atmosphere. Subsequent measurements of the intensity of backscattered light from atmospheric filaments showed an enhancement in the amount of light beyond that expected from Rayleigh scattering; this was proposed to be the result of longitudinal index variations in the filament acting as a Bragg reflector for the generated SC.118 Spectral measurements of the SC produced over a long vertical path in the atmosphere and reflected from a cloud at 4 km indicated a much higher level of energy in the 1,000-2,000-nm region than indicated by Figure 3-20, by about an order of magnitude, attributed to the much longer generation path than that in the laboratory.119
The ability to generate high intensities at long distances and create UV-IR SC light at some distance above ground suggested a variety of lidar applications for filaments and led to the construction and deployment of the Teramobile lidar system,120 built in 2000-2001 as part of a French-German effort. The Ti:sapphire laser (supplied by Thales in France) has the specifications listed in Table 3-2, and the receiver employs a 40-cm-diameter telescope along with a variety of detectors and a 50-cm spectrograph for spectral analysis. A plan drawing and photograph of the system appear in Figure 3-21. Figure 3-22 shows a nighttime photograph of the SC light generated by the Teramobile system, and Figure 3-23 provides both aerosol and DIAL data generated from the system, the latter showing H2O
113 M. Mlejnek, E.M. Wright, and J.V. Moloney, 1999, “Moving-focus versus self-waveguiding model for long-distance propagation of femtosecond pulses in air,” IEEE J. Quant. Electron. 35: 1771.
114 F. Courvoisier, V. Boutou, J. Kasparian, E. Salmon, G. Méjean, J. Yu, and J.-P. Wolf, 2003, “Ultra-intense light filaments transmitted through clouds,” Appl. Phys. Lett. 83: 213.
115 W. Liu, F. Théberge, E. Arévalo, J.F. Gravel, A. Becker, and S.L. Chin, 2005, “Experiment and simulations on the energy reservoir effect in femtosecond light filaments,” Opt. Lett. 30: 2602.
116 L. Bergé, S. Skupin, G. Méjean, J. Kasparian, J. Yu, S. Frey, E. Salmon, and J.P. Wolf, 2005, “Supercontinuum emission and enhanced self-guiding of infrared femtosecond filaments sustained by third-harmonic generation in air,” Phys. Rev. E71: 016602.
117 J. Kasparian, R. Sauerbrey, D. Mondelain, S. Niedermeier, J. Yu, J.-P. Wolf, Y.-B. André, M. Franco, B. Prade, S. Tzortzakis, A. Mysyrowicz, A.M. Rodriguez, H. Wille, and L. Wöste, 2000, “Infrared extension of the supercontinuum generated by femtosecond terawatt laser pulses propagating in the atmosphere,” Opt. Lett. 25: 1397.
118 J. Yu, D. Mondelain, G. Ange, R. Volk, S. Niedermeier, J.-P. Wolf, J. Kasparian, and R. Sauerbrey, 2001, “Backward supercontinuum emission from a filament generated by ultrashort laser pulses in air,” Opt. Lett. 26: 533.
119 G. Mejean, J. Kasparian, E. Salmon, J. Yu, J.- P. Wolf, R. Bourayou, R. Sauerbrey, M. Rodriguez, L. Woste, H. Lehmann, B. Stecklum, U. Laux, J. Eisloffel, A. Scholz, and A.P. Hatzes, 2003, “Towards a supercontinuum-based infrared lidar,” Appl. Phys. B77: 357.
120 H. Wille, M. Rodriguez, J. Kasparian, D. Mondelain, J. Yu, A. Mysyrowicz, R. Sauerbrey, J.-P. Wolf, and L. Woste, 2002, “Teramobile: A mobile femtosecond-terawatt laser and detection system,” Eur. Phys. J. AP 20: 183.
FIGURE 3-20 “Measured spectrum of the supercontinuum generated in the center of the beam by 2-TW laser pulses. The results are shown for two different chirp settings that correspond to an initial pulse duration of 35 fs without chirp after the compressor (filled symbols) and a 55-fs initial pulse duration with negative chirp after the compressor (open symbols). Inset, spectrum of the SC generated in the center of the beam by 100 fs pulses as a function of pulse power value (200 and 100 mJ for 2 and 1 TW, respectively). The two curves have the same normalization factor (24).” SOURCE: J. Kasparian, R. Sauerbrey, D. Mondelain, S. Niedermeier, J. Yu, J.-P. Wolf, Y.-B. André, M. Franco, B. Prade, S. Tzortzakis, A. Mysyrowicz, A. M. Rodriguez, H. Wille, and L. Wöste, 2000, “Infrared extension of the supercontinuum generated by femtosecond terawatt laser pulses propagating in the atmosphere,” Opt. Lett. 25: 1397.
TABLE 3-2 Specifications for Ti:sapphire Laser in the Teramobile Lidar System
| Center wavelength | 793 nm |
| Pulse energy | 350 mJ |
| Pulse duration | 70 fs (sech²) |
| Peak power | 5 TW |
| Repetition rate | 10 Hz |
| Output beam diameter | 50 mm |
| Chirped pulse duration | 70 fs to 2 ps, positive or negative chirp |
| Energy stability | 2.5 percent RMS over 400 shots |
| Dimensions | 3.5 × 2.2 m |
SOURCE: H. Wille, M. Rodriguez, J. Kasparian, D. Mondelain, J. Yu, A. Mysyrowicz, R. Sauerbrey, J.-P. Wolf, and L. Woste, 2002, “Teramobile: A mobile femtosecond-terawatt laser and detection system,” Eur. Phys. J. AP 20: 183. With kind permission of The European Physical Journal (PEG).
FIGURE 3-21 Plan for Teramobile lidar system (left) and photograph of system (right), built in a standard ISO 20 ft sea container. SOURCE: Copyright Teramobile. Used with permission.
FIGURE 3-22 Nighttime photograph of SC light generated by a vertically directed beam from the Teramobile system. SOURCE: Copyright Teramobile. Used with permission.
vapor and O2 absorption lines.121 In principle, spectral analysis of the near-IR data can provide (through the intensity of the water absorption lines) a probe of humidity and (through analysis of the spectral distribution of O2 lines around 760 nm) a probe of atmospheric temperature. Under some conditions, SC light from as high as 20 km has been observed.
Other applications of the Teramobile system have been in aerosol characterization, through the simultaneous generation of backscatter from a wide variety of wavelengths. In particular, the broad SC spectrum allows probing clouds and determining not only size distributions of the aerosols, but also,
121 J. Kasparian, M. Rodriguez, G. Méjean, J. Yu, E. Salmon, H. Wille, R. Bourayou, S. Frey, Y.-B. Andre, A. Mysyrowicz, R. Sauerbrey, J.-P. Wolf, and L. Wöste, 2003, “White-light filaments for atmospheric analysis,” Science 301: 61.
FIGURE 3-23 “(A) Schematic of the Teramobile lidar experimental setup. Before launch into the atmosphere, the pulse is given a chirp, which counteracts group velocity dispersion during its propagation in air. Hence, the pulse recombines temporally at a predetermined altitude, where a white light continuum is produced and then is backscattered and detected by the receiver. (B) Vertical SC aerosol/Rayleigh backscatter profile at three wavelengths: 270 nm (third harmonic), 300 nm, and 600 nm. (C) High-resolution atmospheric absorption spectrum from an altitude of 4.5 km, measured in a DIAL configuration.” SOURCE: From J. Kasparian, M. Rodriguez, G. Méjean, J. Yu, E. Salmon, H. Wille, R. Bourayou, S. Frey, Y-B. Andre, A. Mysyrowicz, R. Sauerbrey, J.-P. Wolf, and L. Wöste, 2003, “White-light filaments for atmospheric analysis,” Science 301: 61. Reprinted with permission from AAAS.
FIGURE 3-24 “Remote detection and identification of bioaerosols with Teramobile system. The femtosecond laser illuminates a plume of riboflavin (RBF)-containing microparticles 45 m away (left). The backward-emitted two-photon-excited fluorescence, recorded as a function of distance and wavelength, exhibits the specific RBF fluorescence signature for the bioaerosols (middle) but not for pure water droplets” (simulating haze, right).122
SOURCE: G. Mejean, J. Kasparian, J. Yu, S. Frey, E. Salmon, and J.-P. Wolf, 2004, “Remote detection and identification of biological aerosols using a femtosecond terawatt lidar system,” Appl. Phys. B78: 535.
through spectral analysis of the H2O and O2 absorption lines present in the returned spectra, cloud humidity and temperature.123
Finally, studies with the Teramobile system attempted to simulate the detection of bioactive aerosols through LIF detection of clouds of 1-µm water droplets containing either riboflavin or just pure water. The system excited riboflavin fluorescence in the blue-green region by two-photon processes, by virtue of the high peak power in the pulse. The experimental setup and data appear in Figure 3-24.124 A major advantage of the technique lies in the better atmospheric propagation of the near-IR light, compared to the short-wavelength light needed to directly excite the fluorescence. Presumably, tryptophan or nicotinamide adenine dinucleotide (NADH) fluorescence in bioactive aerosols could also be excited via higher-order excitation processes.
Future work on femtosecond lidar systems may be able to employ newer, high-energy ultrafast sources using directly diode-pumped Yb-doped crystals, which would permit construction of less expensive, smaller, and more efficient sources.
In summary, the development of continuum sources driven by femtosecond-pulse lasers (and in some cases by nanosecond-pulse sources) has provided significant improvement in the measurement accuracy of path-averaged DIAL sensors. Continuum sources that are the result of coherent generation processes—and are thus precision frequency combs—have provided greatly enhanced gas detection sensitivities over other active sensors through the dual-comb technique, and have allowed broad spectral scans to be taken in under 100 microseconds. Filament-based white-light generation has enabled a new class of range-resolved DIAL atmospheric measurements. Intense near-IR femtosecond sources that excite bioactive molecules through multiphoton processes have been demonstrated. Finally, femtosecond laser technology, for short-range applications at least, can employ fiber-laser-based sources and can find use in UAV-based sensors.
122 J. Kasparian and J.-P. Wolf, 2008, “Physics and applications of atmospheric nonlinear optics and filamentation,” Optics Express 16:1.
123 R. Bourayou, G. Mejean, J. Kasparian, M. Rodriguez, E. Salmon, J. Yu, H. Lehmann, B. Stecklum, U. Laux, J. Eisloffel, A. Scholz, A.P. Hatzes, R. Sauerbrey, L. Wöste, and J.-P. Wolf, 2005, “White-light filaments for multiparameter analysis of cloud microphysics,” J. Opt. Soc. Am. B22: 369.
124 G. Mejean, J. Kasparian, J. Yu, S. Frey, E. Salmon, and J.-P. Wolf, 2004, “Remote detection and identification of biological aerosols using a femtosecond terawatt lidar system,” Appl. Phys. B78: 535.
Conclusion 3-11: A significant performance enhancement of both path-averaged and range-resolved differential absorption lidar can be facilitated with the use of femtosecond-pulse sources.
Recommendation 3-2: The general application of femtosecond sources should be encouraged at the development level and monitored worldwide.
Recommendation 3-3: Programs to deploy short-range sensors with fiber-laser-based femtosecond sources for use on unmanned aerial vehicles should be supported.
In most of this report, the physical processes under consideration can be fully understood from a semiclassical physics perspective. That is, the optical field is treated classically (Maxwell’s equations, the wave equation, etc.), and the interactions between the optical field and materials (targets, intervening media, detectors, etc.) are understood in terms of a classical electromagnetic field and either a continuous medium or quantized material model. To be sure, “photons” are commonly referred to in these discussions, but this is merely a convenient, if somewhat sloppy, expedient.125 Even in the very low light limit, “photon-counting,” shot noise limit, etc., the physics is that of a classical electromagnetic field interacting with matter.
In this section, however, potential new sensing modalities that can be realized by exploiting the truly quantum nature of the optical field are considered. There are quantum states of light that produce physical behavior that cannot be understood in terms of a classical stochastic electromagnetic field (e.g., antibunched light, squeezed states, and entangled photons) and these can be exploited.
When the term “quantum light” is used, the general reference is to optical fields exhibiting behavior—typically statistical behavior—not possible under the laws of classical physics. Many of the statistical properties of light that pose noise limitations for laser remote sensing are due to the stochastic nature of the electromagnetic field. For example, consider light (electromagnetic radiation of any frequency) from a thermal source (a lightbulb, or the Sun, say) falling onto a photodetector. Whether the detector is a linear mode device such as a p-i-n photodiode or a “photon counting” device such as a Geiger-mode avalanche photodiode, the resulting signal will fluctuate, and the statistics of these fluctuations are well understood. One aspect of the statistics of such a thermal source is “photon bunching”—that is, the photodetection events occur in “clumps.” These clumps are easy to understand by thinking of this optical field as a classical stochastic process. That is to say (taking the photon counting detection case as a concrete example), if a photodetection event has just occurred, chances are the field is fluctuating “high,” and another event is likely to follow immediately. If there has been a significant lapse since the last event, the field is probably fluctuating low, and more waiting time is likely. The photodetections “bunch.” Other sources exhibit fluctuations as well—for example, a laser well above threshold emits light exhibiting Poissonian photocount statistics. This is the same statistical behavior one would observe in the radiation from a classical current. In Poisson photostatistics the arrival times of photons are highly random, exhibiting no temporal correlation at all.
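The contrast between bunched and Poissonian statistics is straightforward to reproduce numerically. The following Python sketch (illustrative parameters; a classical simulation, so it can model bunched and Poissonian but not antibunched light) draws photocounts from a single-mode thermal (Bose-Einstein) distribution and a Poisson distribution and compares their Fano factors (variance/mean), which are approximately 1 + n̄ and 1, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)

def fano(counts):
    """Fano factor (variance/mean): ~1 for Poissonian light, >1 for bunched."""
    return counts.var() / counts.mean()

n_bar, trials = 5.0, 200_000  # mean photocount per interval; number of intervals

# Coherent (laser) light: Poissonian photocounts, Fano factor ~ 1.
coherent = rng.poisson(n_bar, trials)

# Single-mode thermal light: Bose-Einstein (geometric) photocount
# distribution with variance n_bar + n_bar**2, so Fano factor ~ 1 + n_bar.
p = 1.0 / (1.0 + n_bar)
thermal = rng.geometric(p, trials) - 1  # numpy's geometric support starts at 1

print(f"coherent: mean = {coherent.mean():.2f}, Fano = {fano(coherent):.2f}")
print(f"thermal : mean = {thermal.mean():.2f}, Fano = {fano(thermal):.2f}")
```

The excess variance of the thermal source is the photocount signature of the "clumping" described above, even though both sources here have the same mean count.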
However, not all sources exhibit fluctuations that can be explained as simply photon bunching or Poisson statistics. Light emitted from “resonance fluorescence” exhibits photon antibunching.126 That is, the photodetection statistics are more regular than from a laser beam—like pearls on a string (see Figure 3-25). In a sense, then, antibunched light is more regular (or less noisy) than classical physics allows. Photon antibunching can be observed by sampling the light with a beamsplitter; it manifests as a minimum in the photodetection correlation function between the two output ports for zero delay time.
125 W.E. Lamb, 1995, “Anti-photon,” Appl. Phys. B 60: 77.
126 H.J. Kimble, M. Dagenais, and L. Mandel, 1977, “Photon antibunching in resonance fluorescence,” Phys. Rev. Lett. 39: 691.
FIGURE 3-25 Photon detections as a function of time for (top) antibunched, (middle) random, and (bottom) bunched light. At high loss rates, the photon statistics approach that of Poissonian classical light and there is no advantage over using a conventional laser. SOURCE: By J.S. Lundeen at en.wikipedia [GPL (http://www.gnu.org/licenses/gpl.html)], from Wikimedia Commons. See http://upload.wikimedia.org/wikipedia/commons/8/86/Photon_bunching.png.
Antibunched light can also exhibit sub-Poissonian photostatistics—that is, the variance in the photon count in a given interval is less than the mean. This raises the possibility that such “ultra-regular” light might be used to improve the signal-to-noise ratio in a remote sensing system. Unfortunately, detailed but straightforward calculations show that this potential advantage does not survive the very high loss rates inherent in long-range active imaging.
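The loss argument can be made concrete with a simple model (an illustrative sketch, not the detailed calculation referred to above): if each photon independently survives a lossy channel with transmission η, binomial thinning drives the Fano factor F toward the Poissonian value of 1 as F′ = 1 − η(1 − F).

```python
import numpy as np

rng = np.random.default_rng(1)

def fano(counts):
    """Fano factor (variance/mean); 0 for a perfectly regular source."""
    return counts.var() / counts.mean()

n_photons, trials = 100, 100_000

# Idealized "ultra-regular" source: exactly n_photons per pulse (Fano = 0).
source = np.full(trials, n_photons)

for loss_db in (0, 10, 30):
    eta = 10 ** (-loss_db / 10)           # channel transmission
    received = rng.binomial(source, eta)  # each photon survives with prob eta
    # Binomial thinning predicts F' = 1 - eta * (1 - F_source) = 1 - eta here.
    print(f"{loss_db:>2} dB loss: Fano = {fano(received):.3f} "
          f"(predicted {1 - eta:.3f})")
```

In this model, at 30 dB of loss even a perfectly regular source is Poissonian to within a tenth of a percent, which is why the sub-Poissonian advantage does not survive long-range link budgets.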
Another kind of non-classical light that has drawn considerable interest is squeezed light.127 Quantum mechanics states that there are fundamental limits to the fluctuations of the electromagnetic field. These fluctuations can be expressed in various ways, and for the purposes of this discussion, the focus is on fluctuations in amplitude and phase of the field, as this is relatively intuitive. The product of the standard deviation of the number of photons and the standard deviation of the phase of the field is limited by a Heisenberg uncertainty principle of the form

σn σφ ≥ 1/2.
For a coherent state, a good approximation to the output of a laser far above threshold, σn is proportional to the standard deviation of the electric field strength times the square root of the number of photons,128 and σφ is proportional to the standard deviation of the electric field strength divided by the square root of the number of photons; this limit is the so-called “standard quantum limit” to measurements of amplitude and phase. The essence of squeezed light is to reduce the variance in one parameter at the expense of the other. Reducing the photon number variance is possible at the expense of increasing the phase uncertainty and vice versa (see Figure 3-26). Squeezed light was first demonstrated in the late 1980s and has been used to demonstrate phase measurement below the shot noise limit,129 absorption spectroscopy below the vacuum state limit,130 and a variety of other measurements beyond the standard quantum limit. Sources of squeezed light with squeezing 10 dB below the shot noise limit have been demonstrated and are finding application in interferometry applications such as gravitational wave detection.131,132 Another application of squeezed light that attracted significant attention in the 1980s was
127 M.C. Teich and B.E.A. Saleh, 1989, “Tutorial: squeezed states of light,” Quantum Opt. 1: 152.
128 Recall that the number of photons is proportional to the square of the electric field.
129 M. Xiao, L.-A. Wu, and H.J. Kimble, 1987, “Precision measurement beyond the shot-noise limit,” Phys. Rev. Lett. 59: 278.
130 E.S. Polzik, J. Carri, and H.J. Kimble, 1992, “Spectroscopy with squeezed light,” Phys. Rev. Lett. 68: 3020.
131 H. Vahlbruch, M. Mehmet, S. Chelkowski, B. Hage, A. Franzen, N. Lastzka, S. Goßler, K. Danzmann, and R. Schnabel, 2008, “Observation of squeezed light with 10-db quantum-noise reduction,” Phys. Rev. Lett. 100: 033602.
FIGURE 3-26 Squeezed states of light in polar representation. The length of the vector indicates the strength of the electric field (square root of the photon number), and the angle represents the phase of the field. (a) Typical representation of a coherent state where the shaded region represents the standard deviation of the photon number and phase. (b) “Phase-squeezed” field where the uncertainty in the phase is reduced at the expense of larger uncertainty in the photon number. (c) “Number-squeezed” state where the uncertainty in the photon number is reduced at the expense of larger uncertainty in the optical phase.
optical waveguide taps with infinitesimal insertion loss (enabling undetectable fiberoptic taps).133,134 A natural question is whether these laboratory demonstrations are extensible to real-world remote-sensing applications. For example, might spectroscopy with squeezed light improve the sensitivity of remote DIAL systems, or might a phase squeezed local oscillator be used to improve the sensitivity of coherent lidar?
Unfortunately, when examined in detail, these schemes tend not to realize the initial promise for remote sensing. For coherent lidar, Rubin and Kaushik examined the problem in detail and concluded that the signal-to-noise ratio for heterodyne laser radar with a coherent target-return beam and a squeezed LO beam is lower than that obtained using a coherent LO, regardless of the method employed to combine the beams at the detector.135,136
One of the most intriguing aspects of quantum physics is quantum entanglement. Entanglement is more than just classical correlation; rather, it is a degree of correlation and predictability that exceeds that possible in classical physics. A detailed discussion of entanglement is beyond the scope of this report, and the reader is referred to the extensive literature.137 In the past two decades, the resources of entanglement have been exploited in various ways in the burgeoning field of quantum information—quantum communication, quantum cryptography, quantum computation and quantum teleportation. Although these fascinating, and potentially groundbreaking, developments are beyond the scope of this study, quantum entanglement has also been proposed as a resource to enable new capabilities in remote sensing. These proposals have not engendered practical gains, but the field bears watching for the development of disruptive capabilities.
132 H. Vahlbruch, A. Khalaidovski, N. Lastzka, C. Gräf, K. Danzmann, and R. Schnabel, 2010, “The GEO600 squeezed light source,” Class. Quantum Grav. 27: 084027.
133 R. Bruckmeier, H. Hansen, S. Schiller, and J. Mlynek, 1997, “Realization of a paradigm for quantum measurements: The squeezed light beam splitter,” Phys. Rev. Lett. 79, 43.
134 J.H. Shapiro, 1980, “Optical waveguide tap with infinitesimal insertion loss,” Opt. Lett. 5: 351.
135 M.A. Rubin and S. Kaushik, 2007, “Squeezing the local oscillator does not improve signal-to-noise ratio in heterodyne laser radar,” Opt. Lett. 32(11): 1369.
136 M.A. Rubin and S. Kaushik, 2009, “Signal-to-noise ratio in squeezed-light laser radar,” Appl. Opt. 48(23): 4597.
137 A. Peres, 1993, Quantum Theory: Concepts and Methods, Kluwer Academic Publishers.
One application area for entangled photon states that has been touted138 as offering a new capability is quantum superresolution. The idea is that in many respects, a maximally entangled state of N photons can behave as a single photon of wavelength λ/N. Experiments have demonstrated this behavior in several scenarios—namely, interferometry with fringe spacing λ/2N and near-field optics. Unfortunately, despite the promise of λ/N quantum superresolution, nobody has yet demonstrated far field (R » D²/λ) imaging performance exceeding conventional diffraction limits with entangled photon states. (Notionally, this would mean far field spatial resolution of λR/ND, where D represents either the transmitter diameter in a flying spot lidar or the receive aperture in a focal plane lidar.)
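The λ/2N fringe behavior can be illustrated numerically (a minimal sketch with illustrative parameters): the N-fold coincidence probability at the output of an interferometer fed with a maximally entangled N-photon state varies as (1 + cos Nφ)/2, producing N times as many fringe nulls per 2π of phase as the classical N = 1 fringe.

```python
import numpy as np

def fringe(phi, n):
    """Detection probability vs interferometer phase for an N-photon
    maximally entangled (NOON) state: (1 + cos(N * phi)) / 2.
    n = 1 recovers the ordinary single-photon fringe."""
    return (1 + np.cos(n * phi)) / 2

phi = np.linspace(0, 2 * np.pi, 10_001)

def count_minima(y):
    """Count interior local minima (fringe nulls) over one 2*pi period."""
    return int(np.sum((y[1:-1] < y[:-2]) & (y[1:-1] <= y[2:])))

for n in (1, 2, 4):
    print(f"N = {n}: {count_minima(fringe(phi, n))} fringe null(s) per 2*pi")
```

The N-fold compression of the fringe period is the sense in which the entangled state behaves like light of wavelength λ/N; as noted above, this has only been demonstrated in interferometric and near-field settings.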
Another application of entangled photon states is measurement below the shot noise limit, reaching the Heisenberg limit. The standard quantum limit for phase measurement of an optical field scales as the inverse of the square root of the number of photons in the field, N:

σφ ∝ 1/√N.
Using entangled states of N photons it is theoretically possible to reach the Heisenberg limit to phase measurement, which scales as the inverse of the number of photons:

σφ ∝ 1/N.
Thus, for a field of 100 photons, one could improve on the performance of a measurement of optical phase by a factor of 10. Again, however, as with sub-Poissonian light, optical losses in a remote sensing system limit the effectiveness of this method. It has been shown139 that once losses exceed the modest level of about 6.7 dB, the phase measurement actually degrades with increasing N.
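This loss behavior can be seen in a toy scaling model (a heuristic sketch, not the analysis of Rubin and Kaushik): if each of the N entangled photons survives a lossy channel with transmission η, the usable N-fold coincidences fall off as η^N, giving an approximate phase uncertainty σφ ≈ 1/(N η^(N/2)). At low loss this improves with growing N; at high loss it degrades.

```python
import math

def phase_sigma(n, loss_db):
    """Heuristic phase uncertainty for an N-photon entangled probe through a
    channel with the given loss: sigma ~ 1 / (N * eta**(N/2)).
    An illustrative scaling model only, not a rigorous bound."""
    eta = 10 ** (-loss_db / 10)  # channel transmission
    return 1.0 / (n * eta ** (n / 2))

for loss_db in (0.5, 10.0):
    sigmas = {n: phase_sigma(n, loss_db) for n in (1, 2, 4, 8)}
    trend = "improves" if sigmas[8] < sigmas[1] else "degrades"
    print(f"loss {loss_db} dB: sigma {sigmas} -> {trend} with N")
```

In this simplified model the crossover from "more entanglement helps" to "more entanglement hurts" occurs at a few dB of loss, qualitatively consistent with the roughly 6.7 dB threshold cited above.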
One concept employing entangled photons that does not appear to be subject to the same deleterious effects of loss is quantum illumination.140 In this technique, a series of single-photon signal pulses is directed at a target. According to Lloyd, “each signal sent out is entangled with an ancilla, which is retained…Detection takes place via an entangling measurement on the returning signal together with the ancilla.”141 Quantum illumination with n bits of entanglement increases the effective signal-to-noise ratio of detection and imaging by a factor of 2^n, an exponential improvement over unentangled illumination. The entangled detection serves as a sort of filter, improving the SNR performance by identifying the received photons as the same ones that were transmitted. What is remarkable is that this performance enhancement is retained even in a long-range remote-sensing application where noise and loss completely destroy the entanglement between signal and ancilla. One application area where such a capability could have a significant impact is missile defense ladar, where one is trying to determine the presence or absence of a target at very long range. Imaging applications with resolution enhancements have also been discussed in the literature, based on arguments similar to the quantum superresolution discussed above, but it must be made clear that this again applies to near-field geometries. The key barrier to the realization of quantum illumination is the entangling measurement for multibit entanglement. Although possible by the laws of physics, implementations of this type of photodetection are not readily available.
138 “Quantum Lidar - Remote Sensing at the Ultimate Limit,” 2009, AFRL-RI-RS-TR-2009-180, Final Technical Report, July.
139 M.A. Rubin and S. Kaushik, 2007, “Loss-induced limits to phase measurement precision with maximally entangled states,” Phys Rev. A 75: 053805.
140 S. Lloyd, 2008, “Enhanced sensitivity of photodetection via quantum illumination,” Science 321(5895): 1463.
141 S. Lloyd, 2008, “Enhanced sensitivity of photodetection via quantum illumination,” Science 321(5895): 1463-1465.
Another intriguing area that has attracted significant attention in the past few years is “ghost imaging.”142,143 In ghost imaging, one exploits the correlations between two light beams in an active imaging scenario. One beam illuminates and is scattered off a target, with the scattered light being collected by a nonresolving “bucket” detector. The second, correlated beam illuminates a spatially resolving detector. The image is formed by cross-correlations of the two photodetection signals. Thus, neither beam produces a target image by itself—the beam interacting with the target provides no spatial resolution, and the beam falling on the detector array has not interacted with the target.
The correlations between the two beams can be either classical or quantum in nature, and ghost imaging can use either direct or phase-sensitive coherent detection. The details in ultimate system performance (spatial resolution, field of view, image contrast, and SNR) do depend on the nature of the correlations and the detection technique. The potential benefits of ghost imaging in practical applications are principally in the expanded design trade space afforded by the technique, but it remains to be seen what role such imaging will have to play in practical systems.
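The cross-correlation reconstruction can be sketched with a minimal computational ghost imaging simulation (the scene, array sizes, and pattern count are all invented for illustration): each random pattern both illuminates the object, producing a single bucket value, and is recorded at the spatially resolving reference arm; correlating the two recovers the object.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented 16x16 binary test object (a bright square on a dark background).
obj = np.zeros((16, 16))
obj[5:11, 5:11] = 1.0

n_patterns = 20_000
patterns = rng.random((n_patterns, 16, 16))  # random illumination speckle
bucket = (patterns * obj).sum(axis=(1, 2))   # nonresolving bucket detector

# Correlate the bucket signal with the recorded patterns: <(B - <B>) I(x, y)>.
# In expectation this is proportional to the object's transmission map.
image = np.tensordot(bucket - bucket.mean(), patterns, axes=1) / n_patterns

# The reconstruction should be brighter on the object than off it.
on = image[obj == 1].mean()
off = image[obj == 0].mean()
print(f"mean reconstructed intensity on/off object: {on:.4f} / {off:.4f}")
```

Note that classical pseudothermal correlations suffice for this basic reconstruction; as discussed above, the quantum versus classical distinction enters through contrast, SNR, and the available design trade space rather than the imaging mechanism itself.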
Nonquantum Advanced Techniques
Although not strictly advanced quantum techniques, there are other nonconventional advanced concepts that should be addressed here as well.
Metamaterials (engineered materials whose properties are determined by their physical rather than molecular structure) have been used to achieve electromagnetic properties that, while allowed by physics, are generally not present in nature. These include control of the dielectric permittivity and magnetic susceptibility of materials to create negative refractive index (NRI) materials,144 which have been shown to have unusual optical properties, enabling “perfect” near-field imaging resolution far beyond the diffraction limit,145 as well as “cloaking devices” that render the metamaterial invisible. These materials face significant technical obstacles before applications become realizable, such as the deleterious effects of absorption, but the field should be monitored closely for breakthroughs in the technical barriers. More fundamental, however, is to understand the validity of the actual applications. There have been claims,146,147 for example, that the perfect lensing capability of NRI materials enables high-resolution imaging at long range beyond the diffraction-limited (λ/D) resolution of the optics. This is based on a misconception148 about the capabilities of NRI telescopes and should not be considered a viable approach. However, there may be other very beneficial applications of NRI and “perfect lenses” in general, and the field should be closely monitored and supported.
In summary, the academic community has conceived and in some cases demonstrated numerous intriguing concepts in quantum imaging that exploit the unique physics of the quantized electromagnetic field. Several of these concepts at first examination appear to offer profound advantages for active remote-sensing systems. Despite the potential promise of exploiting the quantum nature of light, however, most of these concepts can be shown not to provide a real advantage for remote-sensing systems.
142 Y. Shih, 2009, “The physics of ghost imaging,” arXiv:0805.1166v5 [quant-ph], Sep. 29, 2009.
143 B.I. Erkmen and J.H. Shapiro, 2010, “Ghost imaging: from quantum to classical to computational,” Advances in Optics and Photonics 2: 405.
144 S.A. Ramakrishna and T.M. Grzegorczyk, 2008, Physics and Applications of Negative Refractive Index Materials, CRC Press.
145 J.B. Pendry, 2000, “Negative refraction makes a perfect lens,” Phys. Rev. Lett. 85: 3966.
146 J. May and A. Jennetti, 2006, “Telescope resolution using negative refractive index materials,” Proceedings of SPIE 5166: 220, UV/Optical/IR Space Telescopes: Innovative Technologies and Concepts.
147 J. May and S.D. Stearns, 2011, “Imaging system using a negative index of refraction lens,” US Patent 8017894.
148 S. Stanton, B. Corrado, and T. Grycewicz, 2006, Comments on Negative Refractive Index Materials and Claims of Super-Imaging for Remote Sensing, Aerospace Report No. TOR-2006(3907)-4650.
Conclusion 3-12: Advanced quantum approaches, including nonclassical photon statistics, squeezed light, and entangled photons, while intriguing and potentially promising, are not currently of added utility for practical remote-sensing systems. Nevertheless, it is important to pursue and monitor this family of approaches, since new concepts with breakthrough capabilities may emerge.
The following general conclusions regarding emerging active EO systems are derived from discussions in this chapter taken as a whole.
Conclusion 3-13: Emerging active EO systems show strong advantages (signal-to-noise ratio gain, phase compensation, and thinner, lighter apertures) at the cost of increased system complexities (computational processing costs, narrow linewidth lasers, etc.).
Conclusion 3-14: Emerging active EO technologies can complement current conventional ladar systems.
Conclusion 3-15: High-level, active EO emerging technologies will most likely be pursued through funding at university, government, or industry laboratories, with indicators given by publications and presentations.
Conclusion 3-16: Coherent active EO systems will continue to develop for applications that require access to the optical field (not just intensity).
Conclusion 3-17: Large potential markets may propel progress for emerging active EO technologies for commercial applications.