3

Emerging Electro-Optical Technologies

This chapter focuses on emerging active electro-optical (EO) technologies that are rapidly developing and whose implementation is still evolving. It covers several coherent systems, including temporal and spatial heterodyning, synthetic aperture ladar, multiple-input, multiple-output (MIMO) imaging, and speckle imaging, as well as emerging approaches in femtosecond sources and quantum technologies. Although these technologies are not fully matured, they may have a significant impact on the field of active EO sensing in the next 5-10 years and beyond.

MULTIWAVELENGTH LADAR

Color can be a very useful discriminant; people experience this when they compare black-and-white pictures to color pictures. Color distinctions are based on the difference in reflectivity versus wavelength. Active multispectral EO sensing can complement conventional ladar when viewing solid targets by adding surface-material discrimination information that is not available from 2-D or 3-D imaging alone. Active multispectral imaging of targets can have an advantage over passive imaging because one can control the illumination source. Therefore, even at night, near-IR (NIR) wavelengths can be used, whereas a passive multispectral sensor would not receive enough signal at these wavelengths. An active EO multispectral sensor combines the benefits of conventional ladar with multispectral wavelengths in a band that has significant color variation in its reflectance. Conventional ladars, and some passive imaging sensors, utilize the shape of a target for detection and/or identification. Two different approaches have been used to deploy active multispectral sensors against hard targets. One is to use laser wavelengths that are easy to generate, for example 1.064 µm and around 1.5 µm, and take whatever recognition benefit one can gain. The second approach is to determine which wavelengths offer the best active multispectral recognition probabilities and build lasers appropriate to these discriminants. In the second case, a class separability metric is formulated and optimal wavelengths are selected.1 Figure 3-1 shows reflectivity versus wavelength and angle for six different materials. Figure 3-2 shows reflectivity versus wavelength for leaves, showing significant spectral reflectivity changes in the NIR.
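The wavelength-selection step in the second approach can be sketched numerically. The snippet below is a minimal illustration, not the cited report's actual metric: it scores hypothetical (made-up) two-class reflectance statistics at three candidate wavelengths with a Fisher-style separability ratio and picks the most separable band.

```python
import numpy as np

# Hypothetical two-class reflectance statistics (means and standard
# deviations) at three candidate laser wavelengths, in micrometers.
wavelengths = np.array([1.064, 1.55, 2.05])
mean_refl = np.array([[0.45, 0.30],    # class A vs. class B at 1.064 um
                      [0.40, 0.38],    # ... at 1.55 um
                      [0.10, 0.35]])   # ... at 2.05 um
std_refl = np.array([[0.05, 0.05],
                     [0.05, 0.05],
                     [0.04, 0.06]])

def fisher_ratio(mu, sigma):
    """Two-class Fisher separability: (mu_A - mu_B)^2 / (s_A^2 + s_B^2)."""
    return (mu[0] - mu[1]) ** 2 / (sigma[0] ** 2 + sigma[1] ** 2)

scores = np.array([fisher_ratio(m, s) for m, s in zip(mean_refl, std_refl)])
best_wavelength = wavelengths[np.argmax(scores)]   # most separable band
```

With these invented numbers the 2.05 µm band separates the two classes best; a real system would use measured reflectance libraries such as those behind Figures 3-1 and 3-2.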

Multispectral and hyperspectral sensing depend on variation in reflectivity versus wavelength for the surface materials of the object being viewed. The main fundamental limit is that reflectance, or absorption, characterizes only the surface properties of the solid object being viewed.

The key technologies for active multispectral imaging are the illuminator (a multispectral laser that illuminates an object at more than one wavelength) and the associated detector technologies.

To achieve active multispectral imaging, one needs laser sources that cover all of the wavelengths of interest. Therefore, developing active multispectral sensing requires that laser sources either have multiple laser lines or be very broadband.

____________________

1 M. Vaidyanathan, T.P. Grayson, R.C. Hardie, L.E. Myers, and P.F. McManamon, 1997, “Multispectral laser radar development and target characterization,” Proc. SPIE, 3065: 255.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





FIGURE 3-1 Reflectivity versus wavelength and angle for six different materials. SOURCE: D.G. Jones, D.H. Goldstein, and J.C. Spaulding, 2006, "Reflective and polarimetric characteristics of urban materials," AFRL Tech Report, AFRL-MN-EG-TP-2006-7413.

FIGURE 3-2 Leaves from trees have different spectral qualities. Plot shows leaf reflectivity and transmission. SOURCE: Reprinted from Remote Sensing of Environment, 64/3, G.P. Asner, "Biophysical and Biochemical Sources of Variability in Canopy Reflectance," 234-253, 1998, with permission from Elsevier.

As an active system, detection in NIR spectral bands is possible even at night, and NIR bands have a significant amount of reflectivity variation, or color. This sensor type will be especially useful in high-clutter situations or when the target is partially obscured. Active multispectral sensing does not require precision angle/angle resolution, because the surface-material discrimination is based on the ratio of reflectance at various wavelengths, not on the shape of an object. Therefore, to first order, active multispectral sensing is not dependent on the diffraction limit, so it can be a useful long-range discriminant. It is also not dependent on exact object shape, which explains its usefulness when a target is partially obscured. The only angular size effect is color mixing over a pixel as pixels become larger.

One disadvantage is that active multispectral sensing requires laser illumination at all wavelengths being used. Three bands would require three lasers, the ability to divide a single laser into sources at each of the three wavelengths, or sequencing through the bands. An optical parametric oscillator (OPO) can be used to shift the transmitted wavelength over time, as long as the object being viewed is stationary over that period. Alternatively, a broadband laser source containing many wavelengths can be used; a broadband laser has the disadvantage of spreading the laser light over that band, reducing the available light at any particular wavelength.

Active multispectral EO sensing, whether using multiple lasers or a single laser shifted to multiple wavelengths, has a low scientific barrier to entry. Many countries have excellent laser technology, which is the first requirement to manufacture multiline or broadband lasers. As a result, the comparative state of the art in this area is not very meaningful.
Once it is decided to provide laser sources at multiple wavelengths, it is relatively straightforward to make an active multispectral sensor. That said, development of this technology also requires investment in the associated receivers and their integration into a single unit that meets constraints of size, weight, and power (SWaP) and cost.

TEMPORAL HETERODYNE DETECTION: STRONG LOCAL OSCILLATOR

As described in Chapter 2, laser radar systems using direct detection (as in 3-D flash imaging) can be limited by the noise in the detector. One method for reducing the effect of detector noise was taken from the radio frequency (RF) community. Heterodyne detection was originally developed in the RF domain as a way to convert a received signal to a fixed intermediate frequency that can be processed more conveniently, and with less noise, than the original carrier frequency. This practice has been similarly adapted to optics as a way to detect weak signals. In optical heterodyne detection, a weak input laser signal is mixed with a local oscillator (LO) laser wave by simultaneously illuminating a detector with both signals. For temporal heterodyne it is very important to match the illumination angles of the LO light and the return signal light across the detector, or else spatial fringes develop; fringes with a spatial period smaller than a detector element average the interfering signal to zero across that detector. Figure 3-3 illustrates the arrangement of a simple optical heterodyne receiver. A laser transmits a coherent waveform of light toward a target. The reflected light is mixed with the reference laser (local oscillator) beam at a beam splitter in the receive optical path, and the beams are superimposed on the detector. The resulting photocurrent is proportional to the total optical intensity, which is the square of the total electric field amplitude.
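The square-law mixing just described can be reproduced in a short simulation. This is a hedged sketch with assumed numbers: the optical carrier is represented at an RF-scale frequency so it can be sampled directly, and the detector's finite bandwidth is modeled as a simple low-frequency mask. Squaring the summed fields produces a beat at the LO offset frequency, which an FFT of the "photocurrent" recovers.

```python
import numpy as np

fs = 1e9                       # sample rate, 1 GHz (assumed)
t = np.arange(4096) / fs
f_sig, f_if = 200e6, 25e6      # stand-in carrier and LO offset (assumed)

E_sig = 0.01 * np.cos(2 * np.pi * f_sig * t)            # weak return field
E_lo = 1.00 * np.cos(2 * np.pi * (f_sig + f_if) * t)    # strong offset LO
i_det = (E_sig + E_lo) ** 2    # photocurrent ~ intensity = |total field|^2

spec = np.abs(np.fft.rfft(i_det - i_det.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = freqs < 100e6           # finite detector bandwidth (assumed)
f_peak = freqs[band][np.argmax(spec[band])]   # recovered beat, ~25 MHz
```

Within the detector band, the dominant spectral line sits at the 25 MHz difference frequency, and its amplitude scales with the product of the LO and signal field strengths, which is the amplification benefit of a strong LO.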
For temporal heterodyne detection, the reference laser frequency is offset from the laser source by ω_if, and the resulting optical intensity has fluctuations at the difference and sum frequencies of the two fields and at double the frequency of each field. If the LO power is increased above all other noise sources, the signal-to-noise ratio becomes limited only by shot noise of the return signal. The LO frequency is offset so it is possible to determine whether any velocity is toward or away from the sensor. The coherent receiver is usually designed to isolate the difference frequency component from fluctuations and noise at other frequencies.2

FIGURE 3-3 Simple heterodyne (coherent) laser radar configuration.

For traditional heterodyne detection, the LO field strength must be much higher than the return signal strength in order to mitigate the effects of various noise sources. However, if the detector is sufficiently sensitive, there is little need to mitigate detector noise sources by using a strong LO. If the field strengths are similar, the noise mitigation benefit of a strong LO is lost, but the frequency comparison and narrowband filtering features are retained.

TEMPORAL HETERODYNE DETECTION: WEAK LOCAL OSCILLATOR

Traditional heterodyne detection with a strong LO poses an added challenge if arrays of detectors are desired. While a high-resistance receiver, such as a focal plane array with relatively low bandwidth, can operate with low LO power, other standard high-bandwidth IR detectors used in heterodyne detection systems, such as a linear GHz-bandwidth photodiode, can require 1 mW of LO power or more to reach the shot-noise limit. This may result in unacceptable heat loads of greater than 10 W for large arrays of 10,000 elements.3 A 256 × 256 array would require even larger LO power. Using a photon-counting detector allows use of a low-power LO and potentially the ability to use the same detector for both coherent and direct detection. In order to use photon-counting detectors such as Geiger-mode (GM) avalanche photodiodes (APDs), the LO strength must be matched to the signal strength. In this case, the noise mitigation effects of the strong LO are lost, but the ability to detect frequency shifts is maintained. The block diagram for weak-LO heterodyne detection is the same as that shown in Figure 3-3, with the detector replaced by a GM-APD array.
A laser transmits a coherent waveform toward a target, the reflected beam is mixed with the LO at a beam splitter in the receive optical path, and the beat signal is detected by the receiver. The object is imaged onto the detector array, but in this case the readout is simply the photon arrival time. The size of the angle/angle resolution element depends on the detector angular subtense and the size of the focused optical spot. To detect more than one photon per angle/angle resolution element per reset of the detector, the receive optics can be constructed so that each focused optical spot is spread across a group of pixels, called a macropixel.4 Each macropixel then acts as a photon-number-resolving detector whose dynamic range is equal to the number of pixels contained in the macropixel. For example, a 32 × 32 GM-APD array can be broken up into an 8 × 8 array of photon-number-resolving macropixels. Each macropixel can be a 4 × 4 array of pixels, or 16 pixels, resulting in a 4-bit dynamic range. When plotted as a function of time for a single macropixel, the photon arrival times map out the frequency of the beat signal. The detector readout rate must be high enough to sample the beat frequency between the LO and the return signal, or something must be done to mitigate aliasing. For a single-pixel detector this means the beat frequency cannot be sampled if it is higher than half the array readout rate, but the macropixel allows detector-bandwidth-limited sampling of the photon arrival times. With large enough macropixels, beat frequencies can be sampled up to the detector bandwidth rather than being limited by the detector array frame rate. In experiments, the return signal for each pulse is coherently integrated (that is, a fast Fourier transform of the photon arrival times for each pulse is performed), then incoherently averaged over multiple pulses (that is, the power spectral densities, PSDs, are summed over all collected pulses). If the LO and transmit signals are offset in frequency by f_offset, the beat signal is located at f_offset. If the target has a velocity component along the ladar's line of sight, the PSD should show a peak at the sum of the offset frequency and the target's Doppler frequency. The PSD of several return pulses can be averaged to smooth the curve, as shown in Figure 3-4.

FIGURE 3-4 Power spectral density (PSD) of the detected current for a temporal heterodyne laser radar receiver, shown for (a) 1 pulse and (b) 100 pulses of incoherent averaging. SOURCE: L. Jiang, E. Dauler, and J. Chang, 2007, "Photon-number-resolving detector with 10 bits of resolution," Physical Review A 75(6): 062325.

There are important limitations to this technique that limit its applicability and the measurement concept of operations (CONOPS).

____________________

2 P. McManamon, 2012, "Review of ladar: A historic, yet emerging, sensor technology with rich phenomenology," Optical Engineering 51(6): 060901.

3 L. Jiang and J. Luu, 2008, "Heterodyne detection with a weak local oscillator," Applied Optics 47(10): 1486-1503.

4 L. Jiang, E. Dauler, and J. Chang, 2007, "Photon-number-resolving detector with 10 bits of resolution," Physical Review A 75(6): 062325.
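The macropixel counting and incoherent PSD averaging described above can be mimicked with a toy Monte Carlo (all numbers assumed): photon counts per time bin follow a Poisson law whose rate is modulated at the beat frequency, each pulse's count record is Fourier transformed, and the per-pulse PSDs are summed over many pulses.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, dt = 256, 1e-6           # time bins per pulse and bin width (assumed)
f_beat = 50e3                    # offset-plus-Doppler beat frequency (assumed)
t = np.arange(n_bins) * dt
rate = 4.0 * (1 + np.cos(2 * np.pi * f_beat * t))   # mean photons per bin

def pulse_psd():
    counts = rng.poisson(rate)                       # macropixel photon numbers
    return np.abs(np.fft.rfft(counts - counts.mean())) ** 2

psd_avg = sum(pulse_psd() for _ in range(100)) / 100   # incoherent averaging

freqs = np.fft.rfftfreq(n_bins, dt)
f_est = freqs[1:][np.argmax(psd_avg[1:])]            # skip DC; peak near f_beat
```

As in Figure 3-4, averaging the PSDs over pulses smooths the noise floor while the peak stays at the beat frequency, from which the Doppler shift is read off after subtracting the known offset.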
Principal among these limitations are the dynamic range and visibility constraints and requirements on LO power control, as well as the timescale constraints that link photocount rate, beat frequency, and vibrational frequency. These limitations were discussed in more detail in the section on laser vibrometry in Chapter 2.

As indicated in the preceding section, it is critical to maintain coherence between the wavefronts of the signal and the LO. While it may be possible to minimize the effects on the system side, the transmission medium may ultimately determine the performance of heterodyne detection. In addition to large fluctuations of attenuation produced by fog, rain, and smoke, inhomogeneities in the atmosphere itself may produce wavefront distortion. As a result of this wavefront distortion caused by air turbulence, a large portion of the light power can be converted into higher-order modes, making it difficult to match the LO and return signal patterns. Therefore, heterodyne detection over large distances through the atmosphere is inherently difficult.

Unlike in direct detection receivers, the dominant noise source in heterodyne or coherent receivers is the shot noise generated by the local oscillator beam. For a matched-filter receiver, that effective noise is equal to one detected photon per resolution element (in both time and space).5 To contend efficiently with this inherent noise, a coherent detection system is best designed so that on the order of one (or a few) signal photons are detected per angle/angle/range resolution cell per pulse. Below one detected signal photon per resolution element per pulse, the required transmitter power scales as the square root of the pulse repetition frequency (PRF). Therefore, if a 10 W, 100 Hz transmitter is the optimal coherent design (giving ~1 photon per resolution element), then roughly 32 W would be required at 1 kHz and 100 W at 10 kHz. This can lead to higher pulse energies and lower PRFs being the most energy-efficient solution for many measurement problems, which may not be as feasible as other designs, whether technologically or where low SWaP and/or high reliability are required. Direct detection receivers do not have this fundamental noise constraint and can have much less than one detected noise photon per resolution element per pulse.6

Another fundamental limit of heterodyne detection is the effect of speckle, present in highly coherent light. As discussed, the LO and signal must be temporally coherent. They also need to be spatially coherent across the face of the detector, without an additional linear phase difference; otherwise they will produce spatial fringes across the detector, destroying the signal.
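The speckle limit just mentioned is easy to reproduce numerically. The following is a hedged toy model with assumed geometry: a rough surface imparts uniform random phases to unit-amplitude scatterers, the far field is approximated by an FFT, and the resulting intensity has contrast near 1. Averaging independent realizations lowers the contrast roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_frame(n=64):
    """One fully developed speckle pattern: unit-amplitude scatterers
    with uniform random phases, far field approximated by a 2-D FFT,
    intensity = |field|^2."""
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
    field = np.fft.fft2(np.exp(1j * phase)) / n
    return np.abs(field) ** 2

def contrast(img):
    return img.std() / img.mean()    # ~1 for fully developed speckle

single = speckle_frame()
avg16 = sum(speckle_frame() for _ in range(16)) / 16  # 16 realizations
```

With 16 independent realizations the contrast drops toward 1/4, which is the quantitative content of the speckle-averaging techniques discussed in this section.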
In many usage scenarios the signal is reflected from optically rough surfaces, producing randomized phases and a "salt and pepper" intensity modulation on the image known as speckle,7 which also occurs in microwave synthetic aperture radar when the illumination source is sufficiently temporally coherent, as it is for a laser. Techniques such as averaging independent speckle realizations can be used to reduce the speckle.8

Coherent detection techniques require narrow-linewidth lasers with coherent illumination. The coherence length must be longer than the two-way round-trip path of the pulse, or a sample of the outgoing signal must be stored until the signal returns and used to develop the LO. In the second case the coherence length of the illuminating laser must still be longer than twice the depth of the target. A method of shifting the LO frequency is required to determine the direction of the velocity; this is usually accomplished with an acousto-optic modulator. Larger, high-bandwidth arrays are also important. Finally, adaptive optics may help reduce the effects of turbulence.

Published literature on the development of the laser systems described above would be an indicator of progress. Indicators of progress in heterodyne detection systems may also be found in literature and research on applications of coherent ladar (vibrometry, spectroscopy, synthetic aperture ladar, etc.). Additional indicators would be work on high-bandwidth arrays and, for heterodyne approaches that do not have sensitive detectors, work on alternating-current (AC) coupled arrays.

There are a number of advantages to using heterodyne detection rather than direct detection. First, as mentioned above, a weak signal can be amplified by a strong LO, and the signal-to-noise ratio with heterodyne detection depends only on the signal strength, the detector quantum efficiency, and the signal bandwidth.
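The shot-noise-limited dependence just stated (signal power, quantum efficiency, and bandwidth only) corresponds to SNR = η·P_sig/(hν·B). A small calculator with illustrative, assumed numbers:

```python
def heterodyne_snr(p_sig, wavelength, bandwidth, qe):
    """Shot-noise-limited heterodyne SNR: qe * P_sig / (h * nu * B)."""
    h = 6.626e-34                 # Planck constant, J*s
    c = 3.0e8                     # speed of light, m/s
    nu = c / wavelength           # optical frequency, Hz
    return qe * p_sig / (h * nu * bandwidth)

# Assumed example: 1 nW return at 1.55 um, 1 MHz bandwidth, 80% QE.
snr = heterodyne_snr(1e-9, 1.55e-6, 1e6, 0.8)
```

Even a nanowatt-level return gives an SNR in the thousands under these assumptions, and the SNR scales linearly with signal power; notably, the LO power does not appear once the shot-noise limit is reached.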
These signal-to-noise results hold for both temporal and spatial heterodyne detection. This means that high-gain detectors like those used in direct-detection 3-D ladars are not necessarily needed, and shot-noise-limited detection is possible with low-tech photodiodes. Second, the heterodyne receiver provides high discrimination against background light and other radiation. Unlike direct detection, where background light causes problems when it is of the same order of magnitude as the signal power, in heterodyne detection the background light must be comparable to the LO power, which in many cases is made quite high and can be set to dominate the background light.

____________________

5 P. Gatt and S. Henderson, 2001, "Laser radar detection statistics: A comparison of coherent and direct detection receivers," Proc. SPIE, 4377: 251.

6 Personal communication from Sammy Henderson, President, Beyond Photonics, April 20, 2013.

7 C. Dainty, ed., 1984, Laser Speckle and Related Phenomena, Springer Verlag.

8 J.W. Goodman, 2007, Speckle Phenomena in Optics: Theory and Applications, Roberts & Co.

Furthermore, the coherent detection bandwidth can be controlled by a postdetection electronic filter that can be as narrow as desired.9 Heterodyne detection usually has much narrower receiver bandwidths than direct detection. A third feature of coherent detection takes advantage of the fact that the amplified output occurs at the difference frequency between the LO and signal beams. This sensitivity to frequency difference makes it possible to measure the phase or frequency shift of the signal and hence obtain Doppler measurements for moving targets. This type of measurement is not directly possible with direct-detection systems, which measure only intensity; it takes multiple range measurements to measure velocity indirectly using direct detection.

While heterodyne detection offers the potential for highly sensitive measurements, there are a number of practical limitations to the scheme. Coherent ladars are essentially interferometers. If the phases of the two beams are not well matched, the fringes will oscillate back and forth and the signal will be washed out. This lowers efficiency in temporal heterodyne. For heterodyne detection, the two fields from the transmitter and the LO must be spatially locked in phase at the detector. The two beams must be coincident and, to provide maximum signal-to-noise ratio, their diameters must be equal. The beams must propagate in the same direction, and the wavefronts must have the same curvature. (For spatial heterodyne, the LO and return signal propagate in slightly different directions, causing spatial fringes to develop.) Finally, for temporal heterodyne, the beams must be identically polarized, so that their electric vectors are coincident.10 These requirements are called "coherent superposition," and failure to meet them can cause a loss of signal reception.
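The cost of violating coherent superposition can be quantified for the simplest case, an angular misalignment between the LO and the signal: averaging the resulting fringe pattern over a detector of width w gives a fringe visibility of |sinc(wθ/λ)|, which vanishes once a full fringe fits across the pixel. A sketch with assumed dimensions:

```python
import numpy as np

lam = 1.55e-6      # wavelength, m (assumed)
w = 100e-6         # detector element width, m (assumed)

def mixing_efficiency(theta):
    """Averaged interference visibility for an LO/signal angular
    misalignment theta across a detector of width w: |sinc(w*theta/lam)|.
    (np.sinc(x) = sin(pi*x)/(pi*x).)"""
    return float(np.abs(np.sinc(w * theta / lam)))

aligned = mixing_efficiency(0.0)         # 1.0: full fringe visibility
one_fringe = mixing_efficiency(lam / w)  # ~0: one fringe spans the pixel
```

For these assumed numbers, one fringe across the pixel occurs at a tilt of only λ/w = 15.5 mrad, and half that tilt already cuts the heterodyne signal to 2/π of its aligned value, illustrating how tight the angular matching requirement is.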
A good way to deal with some of these constraints is to use a single laser for both the LO and the signal and, for temporal heterodyne, to couple it with an acousto-optic modulator to create the frequency difference between the two. In this configuration, the relative phase of the two beams is fairly stable even if the source does not exhibit low phase noise. In addition to the superposition requirements, the laser must be coherent, and the coherence length must either be longer than the round-trip distance or at least longer than twice the depth of the target, as long as master oscillator drift is compensated by some technique, such as delaying a sample of the master oscillator to use as the local oscillator.

Heterodyne (or coherent) detection can be used in a number of applications, such as coherent Doppler ladar measurements, vibrometry, spectroscopy, and very high resolution imaging techniques such as synthetic aperture ladar and inverse synthetic aperture ladar. Several of these applications are discussed in more detail in later sections. Using a photon-counting detector allows use of a low-power local oscillator and potentially the ability to use the same detector for both coherent and direct detection.

Conclusion 3-1: Advantages of shot-noise-limited detection, high background discrimination, and measurement of phase or frequency shifts in addition to intensity make heterodyne detection a compelling and promising technology.

Conclusion 3-2: Heterodyne detection can be used with a weak local oscillator if detectors are already sensitive enough that a strong local oscillator is not required as a method of increasing receiver sensitivity.

SYNTHETIC-APERTURE LADAR

According to Voxtel, "Conventional optical imagers, including imaging ladars, are limited in angle/angle spatial resolution by the diffraction limit of the telescope aperture. As the aperture size increases, the angle/angle resolution improves; as the range increases, spatial resolution degrades. Thus, high-resolution, real-beam imaging at long ranges requires large telescope diameters. Imaging resolution is further dependent on wavelength, with longer wavelengths producing coarser angle/angle resolution. Thus, the limitations of diffraction are most apparent in the radio-frequency domain (as opposed to the optical domain)."11

Buell et al. describe how "A technique known as synthetic-aperture radar (SAR) was invented in the 1950s to overcome this limitation: In simple terms, a large radar aperture is synthesized by processing the pulses emitted at different locations from a radar aperture as it moves, typically on an airplane or a satellite. The resulting image resolution is characteristic of significantly larger apertures. For example, the Canadian RadarSat-II, which flies at an altitude of about 800 km, has an antenna size of 15 × 1.5 meters and operates at a wavelength of 5.6 cm. Its real-aperture resolution is on the order of 1 kilometer, while its synthetic-aperture resolution (with a transmission bandwidth of 100 MHz) is as fine as 3 m. This resolution enhancement is made possible by keeping track of the phase history of the radar signal as it travels to the target and returns from various scattering centers in the scene. The final synthetic-aperture radar image is reconstructed from many pulses transmitted and received during a synthetic-aperture evolution time using sophisticated signal processing techniques."12

An alternative description of what happens to create a high-resolution synthetic-aperture ladar (SAL) image is that at a given instant a laser waveform is transmitted from either a monostatic or a bistatic aperture. The laser light reflects off the target, and the return field is captured using either spatial or temporal heterodyne. To date, almost all SAL work has been done using temporal heterodyne, but the key issue is capturing a sample of the pupil-plane field as large as the real receive aperture.

____________________

9 S. Jacobs, 1988, "Optical heterodyne (coherent) detection," Am. J. Phys. 56(3): 235.

10 O.E. DeLange, 1968, "Optical heterodyne detection," IEEE Spectrum 5(10): 77.
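The pupil-plane description above can be illustrated with a toy one-dimensional model (assumed geometry, no noise or motion error): the far-field pupil field is taken to be the Fourier transform of the scene, a small real aperture captures only a narrow band of pupil samples, and a synthesized aperture captures many more, resolving scatterers the real aperture cannot.

```python
import numpy as np

N = 1024
scene = np.zeros(N)
scene[500] = scene[516] = 1.0        # two closely spaced point scatterers

pupil_field = np.fft.fft(scene)      # far-field (pupil-plane) field

def image_with_aperture(n_samples):
    """Keep only the pupil-plane samples an aperture of the given size
    can capture (lowest spatial frequencies), then transform back to
    form the image."""
    mask = np.zeros(N)
    mask[: n_samples // 2] = 1.0
    mask[-(n_samples // 2):] = 1.0
    return np.abs(np.fft.ifft(pupil_field * mask))

real_img = image_with_aperture(16)   # small real aperture: merged blob
sal_img = image_with_aperture(256)   # synthesized aperture: resolved
```

With 16 pupil samples the two scatterers blur into a single peak; with 256 samples (the "synthesized" aperture) they are cleanly separated, which is the essence of Fourier-transforming a physically large captured pupil-plane field.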
At another location shortly later the same thing is done, and then again and again as the transmit and receive apertures move. If motion issues can be compensated, then a physically large representation of the pupil plane field is captured, which can then be Fourier transformed to form a high-resolution image. Since for monostatic operation both the transmitter and the receiver move, the synthesized pupil plane image is almost twice the distance flown. In recent years, researchers have investigated ways to apply the techniques and processing tools of RF SARs to optical laser radars. According to Buell et al. “There are several motivations for developing such an approach in the optical or visible domain. The first is simply that humans are used to seeing the world at optical wavelengths. Optical SAL would potentially be easier than microwave radar for humans to interpret, even without specialized training. Second, optical wavelengths are around 20,000 times shorter than RF wavelengths, and can therefore provide much finer spatial resolution and/or much faster imaging times.” 13 A typical synthetic aperture motion distance will be many kilometers for SAR but only meters for SAL, assuming the same target resolution requirement. Over time, new applications may arise for which additional resolution requirements impose longer motion distances on SAL. platform with a transmitter-receiver module moves with velocity 𝑣 while it illuminates a target with light The SAL concept is illustrated in Figure 3-5. This paragraph is drawn from Beck et al. “A travel of the platform is given approximately by 𝛿𝑥 = 𝜆/2Δ𝜃, where the change in azimuth angle Δ𝜃 as of mean wavelength λ and receives the scattered light. The imaging angular resolution in the direction of seen by an observer at the target at range 𝑅 is Δ𝜃 = 𝐷 𝑆𝐴 /𝑅 and the synthetic-aperture length developed during flight time T is 𝐷 𝑆𝐴 = 𝑣 × 𝑇”. 
14 The range resolution, in the orthogonal direction, is determined by the bandwidth, B, of the transmitted waveform range resolution 𝛿𝑦 = 𝑐/2𝐵, so long as the receiver can measure the returned bandwidth. Coherent (heterodyne) detection is used to measure the phase history of the returned ladar signals throughout the synthetic-aperture formation time. According to Beck et al. “Two of the main types of synthetic aperture active sensors are (1) spotlight mode (Figure 3-5), in which 11 Voxtel, http://www.virtualacquisitionshowcase.com/document/602/brochure. Accessed on March 14, 2014. 12 W.F. Buell, N.J. Marechal, J.R. Buck, R.P. Dickinson, D. Kozlowski, T.J. Wright, and S.M. Beck, 2004 “Synthetic Aperture Imaging Ladar” Crosslink (Summer): 45-49. 13 Ibid. 14 S. Beck, J. Buck, W. Buell, R. Dickinson, D. Kozlowski, N. Marechal, and T. Wright, 2005 “Synthetic- aperture imaging laser radar: Laboratory demonstration and signal processing,” Appl. Opt. 44(35): 7621-7629.

FIGURE 3-5 Spotlight synthetic-aperture ladar (SAL). The illuminating spot size D_spot at the target is determined by the diffraction limit of the transceiver optic, with diameter D_t, corresponding to the imaging resolution of a conventional imager with the same aperture. The resolution in the direction of travel (azimuthal, 𝛿𝑥) is determined by the wavelength and evolved aperture length. (For strip-map SAL, this length is limited by the illuminating spot size at the target.) The resolution in the orthogonal direction (range, 𝛿𝑦) is determined by the transmitted waveform bandwidth, B. The angle Δ𝜃 is the angle subtended by the synthetic aperture as viewed from an image element at the target. To obtain the resolution in the ground plane, a simple rotation from the slant plane to the ground plane is performed. SOURCE: S.M. Beck, J.R. Buck, W.F. Buell, R.P. Dickinson, D.A. Kozlowski, N.J. Marechal, and T.J. Wright, 2005, “Synthetic aperture imaging laser radar: laboratory demonstration and signal processing,” Appl. Opt. 44(35): 7621.
the transmitted beam is held at one position on the target for the coherent dwell period and then moved to another spot, and (2) strip mode, in which the transmitted beam is continuously scanned across a target. Most of this discussion applies to either case. In strip mode, the aperture synthesis time is limited by the beamwidth of the sensor and the velocity of the platform (the time during which the target is illuminated). Smaller real apertures result in larger illuminating spots and concomitantly longer synthetic apertures, which leads to the nonintuitive (from a conventional imaging perspective) result that the azimuthal resolution in strip-mode SAL is half of the real-aperture diameter”—the smaller the transmitter aperture, the better the resolution.15 Bistatic configurations, where the transmit and receive apertures are not collocated, can also be considered.
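The resolution relations quoted above can be put into numbers. A minimal sketch follows; the platform speed, dwell time, ranges, and apertures are illustrative values only, not parameters of any fielded system.

```python
# Sketch of the SAL resolution relations quoted from Beck et al.:
# spotlight azimuth resolution delta_x = lam/(2*dtheta) with dtheta = v*T/R,
# range resolution delta_y = c/(2*B), and the strip-map result that azimuth
# resolution collapses to D/2 independent of range.

C = 3.0e8  # speed of light, m/s

def spotlight_azimuth_resolution(lam, v, T, R):
    """delta_x = lam / (2 * dtheta), with dtheta = D_SA / R and D_SA = v * T."""
    d_sa = v * T                  # synthetic-aperture length
    dtheta = d_sa / R             # angle subtended by the synthetic aperture
    return lam / (2.0 * dtheta)

def range_resolution(B):
    """delta_y = c / (2 * B), set by the transmitted waveform bandwidth."""
    return C / (2.0 * B)

def stripmap_azimuth_resolution(lam, D, R):
    """Strip-map: the diffraction-limited spot lam*R/D sets D_SA, so R cancels."""
    d_sa = lam * R / D            # synthetic aperture = illuminated spot size
    return lam / (2.0 * d_sa / R)  # = D / 2, independent of range

print(range_resolution(7e9))                                   # ~0.021 m for 7 GHz
print(spotlight_azimuth_resolution(1.5e-6, 100.0, 0.01, 1e3))  # 0.75 mm
print(stripmap_azimuth_resolution(1.5e-6, 0.02, 5e3))          # 0.01 m = D/2
```

Note that a 7 GHz bandwidth reproduces the roughly 2-cm range resolution mentioned later in this section, and the strip-map function returns D/2 at any range, the nonintuitive result quoted above.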
Figure 3-6 shows an early embodiment of a laboratory SAL demonstration system developed at The Aerospace Corporation, based on wide-bandwidth frequency-modulated continuous-wave (FMCW) waveforms. According to Beck et al., “The components employed are all common, off-the-shelf, telecommunication fiber-based devices, allowing a compact system to be assembled that can easily be isolated from environmental effects. The source is split into five paths, for target illumination, target–LO, reference, reference–LO, and wavelength reference. A circulator is used to recover the return pulse, which
15 Ibid.

is mixed with the target–LO in a balanced heterodyne detector. The reference channel is delayed by a fiber loop and then mixed with the reference–LO in a similar manner. The synthetic aperture is created by use of a translation stage to scan the aperture across the target.”16
FIGURE 3-6 Component layout for the fiber-based SAIL system. The components employed are all common, off-the-shelf, telecom, fiber-based devices, allowing a very compact system to be assembled that can be easily isolated from environmental effects. The source is split into five paths for the target illumination, target-local oscillator (LO), reference (REF), reference-local oscillator, and wavelength reference. A circulator is used to recover the return pulse, which is mixed with the target-local oscillator in a balanced heterodyne detector. The reference channel is delayed by a fiber loop and then mixed with the reference-local oscillator in a similar manner. The synthetic aperture is created by using a translation stage to scan the aperture across the target. A molecular wavelength reference cell (hydrogen cyanide, HCN) provides a pulse-to-pulse absolute frequency reference. SOURCE: From Walter Buell (variation of Figure 3 in S.M. Beck, J.R. Buck, W.F. Buell, R.P. Dickinson, D.A. Kozlowski, N.J. Marechal, and T.J. Wright, 2005, “Synthetic aperture imaging ladar: laboratory demonstration and signal processing,” Appl. Opt. 44(35): 7621).
Figure 3-7 shows a typical laboratory image from the system of Figure 3-6 (see caption for details). In 2003, DARPA initiated the Synthetic Aperture Lidar Tactical Imaging (SALTI) program with the aim of achieving high-resolution synthetic aperture lidar imagery from an airborne platform at tactical ranges, moving SAL from the laboratory to operationally relevant environments.
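The delayed-reference FMCW scheme of Figure 3-6 encodes range in a heterodyne beat frequency, f_beat = (B/T_sweep)(2R/c). A minimal numerical sketch follows; the waveform parameters are illustrative, not those of the Aerospace system.

```python
# Minimal FMCW range recovery: mix a linearly chirped return against the LO
# and read range off the beat frequency, f_beat = (B / T_sweep) * (2R / c).
import numpy as np

C = 3.0e8
B, T_sweep = 5e9, 1e-3           # 5 GHz chirp over 1 ms (illustrative)
R_true = 2.0                     # target range in m, like the ~2 m lab setup
tau = 2 * R_true / C             # round-trip delay

fs = 20e6                        # sample rate for the beat signal
t = np.arange(0, T_sweep, 1 / fs)
k = B / T_sweep                  # chirp rate, Hz/s
# Beat between transmitted and delayed copies of a linear chirp (constant
# phase terms dropped): a tone at f_beat = k * tau.
beat = np.cos(2 * np.pi * k * tau * t)

spec = np.abs(np.fft.rfft(beat))
f_beat = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spec[1:]) + 1]
R_est = f_beat * C * T_sweep / (2 * B)
print(f"estimated range: {R_est:.3f} m")
```

The FFT bin spacing (1 kHz here) sets the range quantization; a real system would interpolate the peak and compensate platform motion over the sweep.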
The performance characteristics of the SALTI program are classified, but the system did achieve synthetic aperture resolution exceeding the real-aperture diffraction-limited resolution of the system. The program progressed through DARPA Phase 3 before being terminated in 2007. Lockheed Martin Coherent Technologies (LMCT) also pursued an airborne SAL system17 and presented ground and airborne results at the Laser Sensing and Communications (LS&C) meeting in Toronto in 2011 (Figure 3-8). While the RSAS SALTI system employed a fiber-based linear FMCW approach like the Aerospace system, the LMCT system used a wide-bandwidth (7 GHz, 2 cm resolution), pulse-coded approach.
16 Ibid.
17 B. Krause, J. Buck, C. Ryan, D. Hwang, P. Kondratko, A. Malm, A. Gleason, and S. Ashby, 2011, “Synthetic aperture ladar flight demonstration,” in CLEO:2011, Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), PDPB7.

FIGURE 3-7 SAL-boat target and mosaicked SAL image results. The real-aperture diffraction-limited illuminating spot size is represented at the right. A picture of the target is shown at the left. This target consists of the same retroreflective material used for the triangle images, placed behind a transparency containing the negative of the sailboat image. The image was formed by scanning the target in overlapping strips and then pasting these images together to form a larger image. Some degradation is present due to the phase-screening effects of the transparency film; however, the pattern of the retroreflective material is clearly visible. The range to target in this example was ~2 m, with range diversity achieved by placing the target at a 45-degree angle with respect to the incident light. SOURCE: S.M. Beck et al., op. cit.
FIGURE 3-8 SAL demonstration images. (a) Photograph of the target. (b) SAL image, no corner cube glints. Cross range resolution = 3.3 cm, 30× improvement over the spot size. Total synthetic aperture = 1.7 m, divided into 10 cm subapertures and incoherently averaged to reduce speckle noise. (c) SAL image with corner cube glint references for clean phase error measurement. Cross range resolution = 2.5 cm, 40× improvement over the spot size. Total synthetic aperture = 5.3 m, divided into 10 cm subapertures and incoherently averaged to reduce speckle noise. SOURCE: B. Krause, J. Buck, C. Ryan, D. Hwang, P. Kondratko, A. Malm, A. Gleason, and S. Ashby, 2011, “Synthetic aperture ladar flight demonstration,” in CLEO:2011, Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optical Society of America, 2011), PDPB7.

energy (energy reservoir) surrounding filaments can play an important role in their formation.113 When particles in the propagation path, such as water droplets, snow, or dust, block the filament, the energy in the reservoir will refill the filament core (replenishment).114 Such filamentation properties help explain why filaments can form and propagate under adverse atmospheric conditions such as rain, in contrast to a linearly propagating beam. The background can contain up to 90 percent of the pulse energy, which is beneficial for maintaining the filament formation.115 Calculations of the spatial evolution of filaments are complicated by the high level of nonlinearities and provide a major challenge to numerical modeling. Filaments in the atmosphere, in common with high-intensity propagation of light in fibers, will generate SC emission, from the UV to the IR. The generation of the SC is assumed to be primarily the result of spectral broadening of the laser energy by self-phase modulation. Emission in the UV is enhanced via third-harmonic generation in the atmosphere, which mixes with the SC generated by self-phase modulation of the fundamental.116 Figure 3-20 shows the laboratory-measured spectra of the SC light for different levels of frequency chirp in the pulse as well as different pulsewidths.117 The pulse chirp can be controlled to correct for atmospheric dispersion over a given path so that the pulsewidth is minimized (and peak power maximized) at the desired location in the atmosphere. Subsequent measurements of the intensity of backscattered light from atmospheric filaments showed an enhancement in the amount of light beyond that expected by Rayleigh scattering, and this was proposed to be the result of longitudinal index variations in the filament, acting as a Bragg reflector to the generated SC.
118 Spectral measurements of the SC produced over a long vertical path in the atmosphere and reflected from a cloud at 4 km indicated a much higher level of energy in the 1,000-2,000-nm region than shown in Figure 3-20, by about an order of magnitude, attributed to the much longer generation path than that in the laboratory.119 The ability to generate high intensities at long distances and create UV-IR SC light at some distance above ground suggested applications of filaments to various lidar problems, and led to the construction and deployment of the Teramobile lidar system,120 built in 2000-2001 as part of a French-German effort. The Ti:sapphire laser (supplied by Thales in France) has the specifications listed in Table 3-2, and the receiver employs a 40-cm-diameter telescope, along with a variety of detectors and a 50-cm spectrograph for spectral analysis. A plan drawing and photograph of the system appear in Figure 3-21. Figure 3-22 shows a nighttime photograph of the SC light generated by the Teramobile system, and Figure 3-23 provides both aerosol and DIAL data generated from the system, the latter showing H2O
113 M. Mlejnek, E.M. Wright, and J.V. Moloney, 1999, “Moving-focus versus self-waveguiding model for long-distance propagation of femtosecond pulses in air,” IEEE J. Quant. Electron. 35: 1771.
114 F. Courvoisier, V. Boutou, J. Kasparian, E. Salmon, G. Méjean, J. Yu, and J.-P. Wolf, 2003, “Ultra-intense light filaments transmitted through clouds,” Appl. Phys. Lett. 83: 213.
115 W. Liu, F. Théberge, E. Arévalo, J.F. Gravel, A. Becker, and S.L. Chin, 2005, “Experiment and simulations on the energy reservoir effect in femtosecond light filaments,” Opt. Lett. 30: 2602.
116 L. Bergé, S. Skupin, G. Méjean, J. Kasparian, J. Yu, S. Frey, E. Salmon, and J.P. Wolf, 2005, “Supercontinuum emission and enhanced self-guiding of infrared femtosecond filaments sustained by third-harmonic generation in air,” Phys. Rev. E71: 016602.
117 J. Kasparian, R. Sauerbrey, D. Mondelain, S. Niedermeier, J. Yu, J.-P. Wolf, Y.-B. André, M. Franco, B. Prade, S. Tzortzakis, A. Mysyrowicz, A.M. Rodriguez, H. Wille, and L. Wöste, 2000, “Infrared extension of the supercontinuum generated by femtosecond terawatt laser pulses propagating in the atmosphere,” Opt. Lett. 25: 1397.
118 J. Yu, D. Mondelain, G. Ange, R. Volk, S. Niedermeier, J.-P. Wolf, J. Kasparian, and R. Sauerbrey, 2001, “Backward supercontinuum emission from a filament generated by ultrashort laser pulses in air,” Opt. Lett. 26: 533.
119 G. Mejean, J. Kasparian, E. Salmon, J. Yu, J.-P. Wolf, R. Bourayou, R. Sauerbrey, M. Rodriguez, L. Woste, H. Lehmann, B. Stecklum, U. Laux, J. Eisloffel, A. Scholz, and A.P. Hatzes, 2003, “Towards a supercontinuum-based infrared lidar,” Appl. Phys. B77: 357.
120 H. Wille, M. Rodriguez, J. Kasparian, D. Mondelain, J. Yu, A. Mysyrowicz, R. Sauerbrey, J.-P. Wolf, and L. Woste, 2002, “Teramobile: A mobile femtosecond-terawatt laser and detection system,” Eur. Phys. J. AP 20: 183.
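The chirp pre-compensation idea described above can be sketched with the textbook chirped-Gaussian dispersion formula. The air GVD value and pulse parameters below are rough illustrative numbers (loosely echoing Table 3-2), not Teramobile design data, and the model ignores the nonlinear self-focusing that dominates real filamentation.

```python
# Chirp pre-compensation sketch: a negatively chirped Gaussian pulse launched
# into normally dispersive air recompresses after a predictable distance.
# Linear-dispersion formula (Agrawal's convention):
#   T1(z) = T0 * sqrt((1 + C*b2*z/T0**2)**2 + (b2*z/T0**2)**2)
import numpy as np

b2 = 21.0        # GVD of air near 800 nm, fs^2 per meter (approximate)
T0 = 2000.0      # launched (chirped) pulse duration, fs (~2 ps)
T_tl = 42.0      # transform-limited duration, fs (~70 fs FWHM Gaussian)
C = -np.sqrt((T0 / T_tl) ** 2 - 1)   # negative chirp so the pulse compresses

z = np.linspace(0, 8000, 8001)       # propagation distance, m (1-m steps)
u = b2 * z / T0**2
T1 = T0 * np.sqrt((1 + C * u) ** 2 + u ** 2)

z_focus = z[np.argmin(T1)]
print(f"pulse recompresses near z = {z_focus / 1000:.1f} km, "
      f"minimum duration {T1.min():.0f} fs")
```

With these numbers the pulse reaches its shortest duration a few kilometers downrange, which is the qualitative behavior exploited to place the high-intensity region at a chosen altitude.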

FIGURE 3-20 “Measured spectrum of the supercontinuum generated in the center of the beam by 2-TW laser pulses. The results are shown for two different chirp settings that correspond to an initial pulse duration of 35 fs without chirp after the compressor (filled symbols) and a 55-fs initial pulse duration with negative chirp after the compressor (open symbols). Inset, spectrum of the SC generated in the center of the beam by 100 fs pulses as a function of pulse power value (200 and 100 mJ for 2 and 1 TW, respectively). The two curves have the same normalization factor (24).” SOURCE: J. Kasparian, R. Sauerbrey, D. Mondelain, S. Niedermeier, J. Yu, J.-P. Wolf, Y.-B. André, M. Franco, B. Prade, S. Tzortzakis, A. Mysyrowicz, A.M. Rodriguez, H. Wille, and L. Wöste, 2000, “Infrared extension of the supercontinuum generated by femtosecond terawatt laser pulses propagating in the atmosphere,” Opt. Lett. 25: 1397.
TABLE 3-2 Specifications for Ti:sapphire Laser in the Teramobile Lidar System
Center wavelength: 793 nm
Bandwidth: 16 nm
Pulse energy: 350 mJ
Pulse duration: 70 fs (sech²)
Peak power: 5 TW
Repetition rate: 10 Hz
Output beam diameter: 50 mm
Chirped pulse duration: 70 fs to 2 ps, positive or negative chirp
Energy stability: 2.5 percent RMS over 400 shots
Dimensions: 3.5 × 2.2 m
SOURCE: H. Wille, M. Rodriguez, J. Kasparian, D. Mondelain, J. Yu, A. Mysyrowicz, R. Sauerbrey, J.-P. Wolf, and L. Woste, 2002, “Teramobile: A mobile femtosecond-terawatt laser and detection system,” Eur. Phys. J. AP 20: 183. With kind permission of The European Physical Journal (PEG).

FIGURE 3-21 Plan for Teramobile lidar system (left) and photograph of system (right), built in a standard ISO 20-ft sea container. SOURCE: Copyright Teramobile. Used with permission.
FIGURE 3-22 Nighttime photograph of SC light generated by a vertically directed beam from the Teramobile system. SOURCE: Copyright Teramobile. Used with permission.
vapor and O2 absorption lines.121 In principle, spectral analysis of the near-IR data can provide (through the intensity of the water absorption lines) a probe of humidity and (through analysis of the spectral distribution of O2 lines around 760 nm) a probe of atmospheric temperature. Under some conditions, SC light from as high as 20 km has been observed. Other applications of the Teramobile system have been in aerosol characterization, through the simultaneous generation of backscatter from a wide variety of wavelengths. In particular, the broad SC spectrum allows probing clouds and determining not only size distributions of the aerosols but also,
121 J. Kasparian, M. Rodriguez, G. Méjean, J. Yu, E. Salmon, H. Wille, R. Bourayou, S. Frey, Y.-B. Andre, A. Mysyrowicz, R. Sauerbrey, J.-P. Wolf, and L. Wöste, 2003, “White-light filaments for atmospheric analysis,” Science 301: 61.

FIGURE 3-23 “(A) Schematic of the Teramobile lidar experimental setup. Before launch into the atmosphere, the pulse is given a chirp, which counteracts group velocity dispersion during its propagation in air. Hence, the pulse recombines temporally at a predetermined altitude, where a white light continuum is produced and then is backscattered and detected by the receiver. (B) Vertical SC aerosol/Rayleigh backscatter profile at three wavelengths: 270 nm (third harmonic), 300 nm, and 600 nm. (C) High-resolution atmospheric absorption spectrum from an altitude of 4.5 km, measured in a DIAL configuration.” SOURCE: From J. Kasparian, M. Rodriguez, G. Méjean, J. Yu, E. Salmon, H. Wille, R. Bourayou, S. Frey, Y.-B. Andre, A. Mysyrowicz, R. Sauerbrey, J.-P. Wolf, and L. Wöste, 2003, “White-light filaments for atmospheric analysis,” Science 301: 61. Reprinted with permission from AAAS.

FIGURE 3-24 “Remote detection and identification of bioaerosols with Teramobile system. The femtosecond laser illuminates a plume of riboflavin (RBF)-containing microparticles 45 m away (left). The backward-emitted two-photon-excited fluorescence, recorded as a function of distance and wavelength, exhibits the specific RBF fluorescence signature for the bioaerosols (middle) but not for pure water droplets” (simulating haze, right).122 SOURCE: G. Mejean, J. Kasparian, J. Yu, S. Frey, E. Salmon, and J.-P. Wolf, 2004, “Remote detection and identification of biological aerosols using a femtosecond terawatt lidar system,” Appl. Phys. B78: 535.
through spectral analysis of the H2O and O2 absorption lines present in the returned spectra, cloud humidity and temperature.123 Finally, studies with the Teramobile system attempted to simulate the detection of bioactive aerosols, through LIF detection of clouds of 1-µm water droplets containing either riboflavin or just pure water. The system excited riboflavin fluorescence in the blue-green region by two-photon processes, by virtue of the high peak power in the pulse. The experimental setup and data appear in Figure 3-24.124 A major advantage of the technique lies in the better atmospheric propagation of the near-IR light, compared to the short-wavelength light needed to directly excite the fluorescence. Presumably, tryptophan or nicotinamide adenine dinucleotide (NADH) fluorescence in bioactive aerosols could also be excited via higher-order excitation processes. Future work on femtosecond lidar systems may be able to employ newer, high-energy ultrafast sources using directly diode-pumped Yb-doped crystals, which would permit construction of less expensive, smaller, and more efficient sources.
In summary, the development of continuum sources driven by femtosecond-pulse lasers (and in some cases by nanosecond-pulse sources) has provided significant improvement in the measurement accuracy of path-averaged DIAL sensors. Continuum sources that are the result of coherent generation processes—and are thus precision frequency combs—have provided greatly enhanced gas detection sensitivities over other active sensors through the dual-comb technique, and have allowed broad spectral scans to be taken in under 100 microseconds. Filament-based white-light generation has enabled a new class of range-resolved DIAL atmospheric measurements. Intense near-IR femtosecond sources that excite bioactive molecules through multiphoton processes have been demonstrated. Finally, femtosecond laser technology, for short-range applications at least, can employ fiber-laser-based sources and can find use in UAV-based sensors.
122 J. Kasparian and J.-P. Wolf, 2008, “Physics and applications of atmospheric nonlinear optics and filamentation,” Optics Express 16: 1.
123 R. Bourayou, G. Mejean, J. Kasparian, M. Rodriguez, E. Salmon, J. Yu, H. Lehmann, B. Stecklum, U. Laux, J. Eisloffel, A. Scholz, A.P. Hatzes, R. Sauerbrey, L. Wöste, and J.-P. Wolf, 2005, “White-light filaments for multiparameter analysis of cloud microphysics,” J. Opt. Soc. Am. B22: 369.
124 G. Mejean, J. Kasparian, J. Yu, S. Frey, E. Salmon, and J.-P. Wolf, 2004, “Remote detection and identification of biological aerosols using a femtosecond terawatt lidar system,” Appl. Phys. B78: 535.
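The path-averaged DIAL measurements summarized above rest on the standard two-wavelength ratio, n = ln(P_off/P_on)/(2 Δσ L). A minimal sketch follows; the cross section, powers, and path length are made-up illustrative numbers, not data from any of the systems discussed.

```python
# Standard two-wavelength DIAL retrieval: the path-averaged molecular number
# density follows from the on-line/off-line return ratio,
#   n = ln(P_off / P_on) / (2 * dsigma * L)
# (factor of 2 for the round trip through the absorber).
import math

def dial_concentration(P_on, P_off, dsigma_cm2, L_cm):
    """Path-averaged number density in molecules/cm^3."""
    return math.log(P_off / P_on) / (2.0 * dsigma_cm2 * L_cm)

# Hypothetical example: 1 km one-way path, differential absorption cross
# section 1e-23 cm^2, and 20 percent extra attenuation on the on-line return.
n = dial_concentration(P_on=0.8, P_off=1.0, dsigma_cm2=1e-23, L_cm=1e5)
print(f"{n:.3e} molecules/cm^3")
```

A broadband (continuum or comb) source simply lets this ratio be formed at many line pairs simultaneously, which is the sensitivity advantage noted above.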

Conclusion 3-11: A significant performance enhancement of both path-averaged and range-resolved differential absorption lidar can be facilitated with the use of femtosecond-pulse sources.

Recommendation 3-2: The general application of femtosecond sources should be encouraged at the development level and monitored worldwide.

Recommendation 3-3: Programs to deploy short-range sensors with fiber-laser-based femtosecond sources for use on unmanned aerial vehicles should be supported.

ADVANCED QUANTUM APPROACHES

In most of this report, the physical processes under consideration can be fully understood from a semiclassical physics perspective. That is, the optical field is treated classically (Maxwell’s equations, the wave equation, etc.), and the interactions between the optical field and materials (targets, intervening media, detectors, etc.) are understood in terms of a classical electromagnetic field and either a continuous medium or quantized material model. To be sure, “photons” are commonly referred to in these discussions, but this is merely a convenient, if somewhat sloppy, expedient.125 Even in the very low light limit, “photon-counting,” shot noise limit, etc., the physics is that of a classical electromagnetic field interacting with matter. In this section, however, potential new sensing modalities that can be realized by exploiting the truly quantum nature of the optical field are considered. There are quantum states of light that produce physical behavior that cannot be understood in terms of a classical stochastic electromagnetic field (e.g., antibunched light, squeezed states, and entangled photons), and these can be exploited. When the term “quantum light” is used, the general reference is to optical fields exhibiting behavior—typically statistical behavior—not possible under the laws of classical physics.
Many of the statistical properties of light that pose noise limitations for laser remote sensing are due to the stochastic nature of the electromagnetic field. For example, consider light (electromagnetic radiation of any frequency) from a thermal source (a lightbulb, or the Sun, say) falling onto a photodetector. Whether the detector is a linear mode device such as a p-i-n photodiode or a “photon counting” device such as a Geiger-mode avalanche photodiode, the resulting signal will fluctuate, and the statistics of these fluctuations are well understood. One aspect of the statistics of such a thermal source is “photon bunching”—that is, the photodetection events occur in “clumps.” These clumps are easy to understand by thinking of this optical field as a classical stochastic process. That is to say (taking the photon counting detection case as a concrete example), if a photodetection event has just occurred, chances are the field is fluctuating “high,” and another event is likely to follow immediately. If there has been a significant lapse since the last event, the field is probably fluctuating low, and more waiting time is likely. The photodetections “bunch.” Other sources exhibit fluctuations as well—for example, a laser well above threshold emits light exhibiting Poissonian photocount statistics. This is the same statistical behavior one would observe in the radiation from a classical current. In Poisson photostatistics the arrival times of photons are highly random, exhibiting no temporal correlation at all. However, not all sources exhibit fluctuations that can be explained as simply photon bunching or Poisson statistics. Light emitted from “resonance fluorescence” exhibits photon antibunching. 126 That is, the photodetection statistics are more regular than from a laser beam—like pearls on a string (see Figure 3-25). In a sense, then, antibunched light is more regular (or less noisy) than classical physics allows. 
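The bunching argument above can be checked numerically: modeling thermal light as Poisson counts driven by an exponentially fluctuating intensity reproduces the excess (Bose-Einstein) variance, var = mean + mean², while an ideal laser gives Poisson counts with var = mean. A small simulation, with arbitrary parameters:

```python
# Photocount statistics sketch: thermal ("bunched") light shows excess
# variance; coherent (laser) light is Poissonian. Antibunched light would
# fall below the Poisson line, which no classical field can do.
import numpy as np

rng = np.random.default_rng(1)
mean_n = 4.0          # mean photocount per interval (arbitrary)
shots = 200_000

# Thermal light: exponentially distributed intensity, then Poisson counts
# conditioned on that intensity (doubly stochastic process).
intensity = rng.exponential(mean_n, shots)
thermal = rng.poisson(intensity)

# Coherent-state light: plain Poisson counts.
laser = rng.poisson(mean_n, shots)

for name, n in (("thermal", thermal), ("laser", laser)):
    print(f"{name}: mean={n.mean():.2f}  var={n.var():.2f}")
# thermal variance tends to mean + mean^2 = 20; laser variance tends to 4
```

The ratio var/mean (Fano factor) is the quick diagnostic: above 1 for bunched light, 1 for Poisson, and below 1 only for nonclassical, sub-Poissonian light.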
Photon antibunching can be observed by sampling the light with a beamsplitter; it manifests as a minimum in the photodetection correlation function between the two output ports for zero delay time.
125 W.E. Lamb, 1995, “Anti-photon,” Appl. Phys. B 60: 77.
126 H.J. Kimble, M. Dagenais, and L. Mandel, 1977, “Photon antibunching in resonance fluorescence,” Phys. Rev. Lett. 39: 691.

FIGURE 3-25 Photon detections as a function of time for (top) antibunched, (middle) random, and (bottom) bunched light. At high loss rates, the photon statistics approach that of Poissonian classical light and there is no advantage over using a conventional laser. SOURCE: By J.S. Lundeen at en.wikipedia [GPL (http://www.gnu.org/licenses/gpl.html)], from Wikimedia Commons. See http://upload.wikimedia.org/wikipedia/commons/8/86/Photon_bunching.png.
Antibunched light can also exhibit sub-Poissonian photostatistics—that is, the variance in the photon count in a given interval is less than the mean. This raises the possibility that such “ultra-regular” light might be used to improve the signal-to-noise ratio in a remote sensing system. Unfortunately, detailed, but straightforward, calculations show that this potential advantage does not survive the very high loss rates inherent in long-range active imaging. Another kind of non-classical light that has drawn considerable interest is squeezed light.127 Quantum mechanics states that there are fundamental limits to the fluctuations of the electromagnetic field. These fluctuations can be expressed in various ways; for the purposes of this discussion, the focus is on fluctuations in the amplitude and phase of the field, as this is relatively intuitive. The product of the standard deviation of the number of photons and the standard deviation of the phase of the field is limited by a Heisenberg uncertainty principle of the form

σ_n · σ_φ ≥ 1/2

For a coherent state, a good approximation to the output of a laser far above threshold, σ_n is proportional to the standard deviation of the electric field strength times the square root of the number of photons,128 and σ_φ is proportional to the standard deviation of the electric field strength divided by the square root of the number of photons; this limit is the so-called “standard quantum limit” to measurements of amplitude and phase.
127 M.C. Teich and B.E.A. Saleh, 1989, “Tutorial: squeezed states of light,” Quantum Opt. 1: 152.
128 Recall that the number of photons is proportional to the square of the electric field.
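For a coherent state, the uncertainty relation above is saturated: σ_n = √N and σ_φ ≈ 1/(2√N), so the product sits at the minimum value 1/2. A quick numerical check of this standard result:

```python
# Coherent-state check of the number-phase uncertainty relation:
# sigma_n = sqrt(N) (Poissonian number spread) and sigma_phi ~ 1/(2*sqrt(N)),
# so sigma_n * sigma_phi = 1/2, the minimum-uncertainty (standard quantum
# limit) value, at every photon number.
import math

def coherent_state_uncertainties(N):
    sigma_n = math.sqrt(N)
    sigma_phi = 1.0 / (2.0 * math.sqrt(N))
    return sigma_n, sigma_phi

for N in (100, 10_000, 1_000_000):
    sn, sp = coherent_state_uncertainties(N)
    print(N, sn, sp, sn * sp)   # product is 0.5 in each case
```

Note how the phase uncertainty shrinks only as 1/√N: quadrupling the photon number halves σ_φ, which is why squeezing one quadrature below this level is attractive.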
The essence of squeezed light is to reduce the variance in one parameter at the expense of the other. Reducing the photon number variance is possible at the expense of increasing the phase uncertainty, and vice versa (see Figure 3-26). Squeezed light was first demonstrated in the late 1980s and has been used to demonstrate phase measurement below the shot noise limit,129 absorption spectroscopy below the vacuum state limit,130 and a variety of other measurements beyond the standard quantum limit. Sources of squeezed light with squeezing 10 dB below the shot noise limit have been demonstrated and are finding application in interferometry applications such as gravitational wave detection.131,132 Another application of squeezed light that attracted significant attention in the 1980s was
129 M. Xiao, L.-A. Wu, and H.J. Kimble, 1987, “Precision measurement beyond the shot-noise limit,” Phys. Rev. Lett. 59: 278.
130 E.S. Polzik, J. Carri, and H.J. Kimble, 1992, “Spectroscopy with squeezed light,” Phys. Rev. Lett. 68: 3020.
131 H. Vahlbruch, M. Mehmet, S. Chelkowski, B. Hage, A. Franzen, N. Lastzka, S. Goßler, K. Danzmann, and R. Schnabel, 2008, “Observation of squeezed light with 10-dB quantum-noise reduction,” Phys. Rev. Lett. 100: 033602.

FIGURE 3-26 Squeezed states of light in polar representation. The length of the vector indicates the strength of the electric field (square root of the photon number), and the angle represents the phase of the field. (a) Typical representation of a coherent state, where the shaded region represents the standard deviation of the photon number and phase. (b) “Phase-squeezed” field, where the uncertainty in the phase is reduced at the expense of larger uncertainty in the photon number. (c) “Number-squeezed” state, where the uncertainty in the photon number is reduced at the expense of larger uncertainty in the optical phase.
optical waveguide taps with infinitesimal insertion loss (enabling undetectable fiberoptic taps).133,134 A natural question is whether these laboratory demonstrations are extensible to real-world remote-sensing applications. For example, might spectroscopy with squeezed light improve the sensitivity of remote DIAL systems, or might a phase-squeezed local oscillator be used to improve the sensitivity of coherent lidar? Unfortunately, when examined in detail, these schemes tend not to realize the initial promise for remote sensing. For coherent lidar, Rubin and Kaushik examined the problem in detail and concluded that the signal-to-noise ratio for heterodyne laser radar with a coherent target-return beam and a squeezed LO beam is lower than that obtained using a coherent LO, regardless of the method employed to combine the beams at the detector.135,136 One of the most intriguing aspects of quantum physics is quantum entanglement. Entanglement is more than just classical correlation; rather, it is a degree of correlation and predictability that exceeds that possible in classical physics.
A detailed discussion of entanglement is beyond the scope of this report, and the reader is referred to the extensive literature.137 In the past two decades, the resources of entanglement have been exploited in various ways in the burgeoning field of quantum information—quantum communication, quantum cryptography, quantum computation, and quantum teleportation. Although these fascinating, and potentially groundbreaking, developments are beyond the scope of this study, quantum entanglement has also been proposed as a resource to enable new capabilities in remote sensing. These proposals have not engendered practical gains, but the field bears watching for the development of disruptive capabilities.
132 H. Vahlbruch, A. Khalaidovski, N. Lastzka, C. Gräf, K. Danzmann, and R. Schnabel, 2010, “The GEO600 squeezed light source,” Class. Quantum Grav. 27: 084027.
133 R. Bruckmeier, H. Hansen, S. Schiller, and J. Mlynek, 1997, “Realization of a paradigm for quantum measurements: The squeezed light beam splitter,” Phys. Rev. Lett. 79: 43.
134 J.H. Shapiro, 1980, “Optical waveguide tap with infinitesimal insertion loss,” Opt. Lett. 5: 351.
135 M.A. Rubin and S. Kaushik, 2007, “Squeezing the local oscillator does not improve signal-to-noise ratio in heterodyne laser radar,” Opt. Lett. 32(11): 1369.
136 M.A. Rubin and S. Kaushik, 2009, “Signal-to-noise ratio in squeezed-light laser radar,” Appl. Opt. 48(23): 4597.
137 A. Peres, 1993, Quantum Theory: Concepts and Methods, Kluwer Academic Publishers.

One application area for entangled photon states that has been touted138 as offering a new capability is quantum superresolution. The idea is that in many respects, a maximally entangled state of N photons can behave as a single photon of wavelength λ/N. Experiments have demonstrated this behavior in several scenarios—namely, interferometry with fringe spacing λ/2N and near-field optics. Unfortunately, despite the promise of λ/N quantum superresolution, nobody has yet demonstrated far-field (𝑅 ≫ 𝐷²/𝜆) imaging performance exceeding conventional diffraction limits with entangled photon states. (Notionally, this would mean far-field spatial resolution of λR/ND, where D represents either the transmitter diameter in a flying-spot lidar or the receive aperture in a focal-plane lidar.) Another application of entangled photon states is measurement below the shot noise limit, reaching the Heisenberg limit. The standard quantum limit for phase measurement of an optical field scales as the inverse of the square root of the number of photons in the field, N: δφ ~ 1/√N. Using entangled states of N photons it is theoretically possible to reach the Heisenberg limit to phase measurement, which scales as the inverse of the number of photons: δφ ~ 1/N. Thus, for a field of 100 photons, one could improve on the performance of a measurement of optical phase by a factor of 10. Again, however, as with sub-Poissonian light, optical losses in a remote sensing system limit the effectiveness of this method. It has been shown139 that once losses exceed the modest level of about 6.7 dB, the phase measurement actually degrades with increasing N. One concept employing entangled photons that does not appear to be subject to the same deleterious effects of loss is quantum illumination.140 In this technique, a series of single-photon signal pulses is directed at a target.
According to Lloyd, "each signal sent out is entangled with an ancilla, which is retained…Detection takes place via an entangling measurement on the returning signal together with the ancilla." 141 Quantum illumination with n bits of entanglement increases the effective signal-to-noise ratio of detection and imaging by a factor of 2^n, an exponential improvement over unentangled illumination. The entangled detection serves as a sort of filter, improving the SNR performance by identifying the received photons as the same ones that were transmitted. What is remarkable is that this performance enhancement is retained even in a long-range remote-sensing application where noise and loss completely destroy the entanglement between signal and ancilla. One application area where such a capability could have a significant impact is missile-defense ladar, where one is trying to determine the presence or absence of a target at very long range. Imaging applications with resolution enhancements have also been discussed in the literature, based on arguments similar to those for the quantum superresolution discussed above, but it must be made clear that this again applies only to near-field geometries. The key barrier to the realization of quantum illumination is the entangling measurement for multibit entanglement. Although permitted by the laws of physics, implementations of this type of photodetection are not readily available.

138 "Quantum Lidar—Remote Sensing at the Ultimate Limit," 2009, AFRL-RI-RS-TR-2009-180 Final Technical Report, July.
139 M.A. Rubin and S. Kaushik, 2007, "Loss-induced limits to phase measurement precision with maximally entangled states," Phys. Rev. A 75: 053805.
140 S. Lloyd, 2008, "Enhanced sensitivity of photodetection via quantum illumination," Science 321(5895): 1463-1465.
141 S. Lloyd, 2008, "Enhanced sensitivity of photodetection via quantum illumination," Science 321(5895): 1463-1465.
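The 2^n scaling quoted above is simple exponentiation, but expressing it in decibels makes the claimed benefit concrete (arithmetic only; the function name is illustrative, not from the report):

```python
import math

def quantum_illumination_snr_gain(n_entanglement_bits):
    """Effective SNR multiplier from quantum illumination with n bits of
    entanglement, following Lloyd's 2^n scaling."""
    return 2 ** n_entanglement_bits

# Even modest entanglement resources give large claimed SNR gains.
for n in (1, 10, 20):
    gain = quantum_illumination_snr_gain(n)
    print(n, gain, round(10 * math.log10(gain), 1))  # n, linear gain, gain in dB
```

Ten bits of entanglement would thus correspond to a claimed SNR improvement of about 30 dB, which is why the technique is attractive for detection at very long range.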

Another intriguing area that has attracted significant attention in the past few years is "ghost imaging." 142,143 In ghost imaging, one exploits the correlations between two light beams in an active imaging scenario. One beam illuminates and is scattered off a target, with the scattered light being collected by a nonresolving "bucket" detector. The second, correlated beam illuminates a spatially resolving detector. The image is formed by cross-correlating the two photodetection signals. Thus, neither beam produces a target image by itself—the beam interacting with the target provides no spatial resolution, and the beam falling on the detector array has not interacted with the target. The correlations between the two beams can be either classical or quantum in nature, and ghost imaging can use either direct or phase-sensitive coherent detection. The details of ultimate system performance (spatial resolution, field of view, image contrast, and SNR) do depend on the nature of the correlations and the detection technique. The potential benefits of ghost imaging in practical applications lie principally in the expanded design trade space afforded by the technique, but it remains to be seen what role such imaging will play in practical systems.

Nonquantum Advanced Techniques

Although not strictly quantum techniques, there are other nonconventional advanced concepts that should be addressed here as well. Metamaterials (engineered materials whose properties are determined by their physical rather than molecular structure) have been used to achieve electromagnetic properties that, while allowed by physics, are generally not present in nature. These include control of the dielectric permittivity and magnetic susceptibility of materials to create negative-refractive-index (NRI) materials, 144 which have been shown to have unusual optical properties, enabling "perfect" near-field imaging resolution far beyond the diffraction limit of the wavelength, 145 as well as "cloaking devices" that render the metamaterial invisible. There are significant technical obstacles to overcome before applications of these materials become realizable, such as the deleterious effects of absorption, but the field should be monitored closely for breakthroughs in the technical barriers. More fundamental, however, is to understand the validity of the proposed applications. There have been claims, 146,147 for example, that the perfect-lensing capability of NRI materials enables high-resolution imaging at long range beyond the diffraction-limited (λ/D) resolution of the optics. This is based on a misconception 148 about the capabilities of NRI telescopes and should not be considered a viable approach. However, there may be other very beneficial applications of NRI materials and "perfect lenses" in general, and the field should be closely monitored and supported.

In summary, the academic community has conceived and in some cases demonstrated numerous intriguing concepts in quantum imaging that exploit the unique physics of the quantized electromagnetic field. Several of these concepts at first examination appear to offer profound advantages for active remote-sensing systems. Despite the potential promise of exploiting the quantum nature of light, however, most of these concepts can be shown not to provide a real advantage for remote-sensing systems.

142 Y. Shih, 2009, "The physics of ghost imaging," arXiv:0805.1166v5 [quant-ph], Sep. 29.
143 B.I. Erkmen and J.H. Shapiro, 2010, "Ghost imaging: From quantum to classical to computational," Advances in Optics and Photonics 2: 405.
144 S.A. Ramakrishna and T.M. Grzegorczyk, 2008, Physics and Applications of Negative Refractive Index Materials, CRC Press.
145 J.B. Pendry, 2000, "Negative refraction makes a perfect lens," Phys. Rev. Lett. 85: 3966.
146 J. May and A. Jennetti, 2006, "Telescope resolution using negative refractive index materials," Proceedings of SPIE 5166: 220, UV/Optical/IR Space Telescopes: Innovative Technologies and Concepts.
147 J. May and S.D. Stearns, 2011, "Imaging system using a negative index of refraction lens," U.S. Patent 8,017,894.
148 S. Stanton, B. Corrado, and T. Grycewicz, 2006, Comments on Negative Refractive Index Materials and Claims of Super-Imaging for Remote Sensing, Aerospace Report No. TOR-2006(3907)-4650.
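The cross-correlation image formation described for ghost imaging can be sketched numerically. The toy simulation below assumes classically correlated random intensity patterns and a computational-ghost-imaging geometry; the target shape, pattern count, and variable names are illustrative, not from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden target transmission function (the "object"): a small "T" shape.
target = np.zeros((8, 8))
target[1, 1:7] = 1.0   # top bar of the T
target[1:7, 3] = 1.0   # stem of the T

n_patterns = 20000
corr_sum = np.zeros_like(target)
bucket_sum = 0.0
pattern_sum = np.zeros_like(target)

for _ in range(n_patterns):
    # The same random intensity pattern illuminates the target (whose total
    # scattered light hits a nonresolving "bucket" detector) and is also
    # recorded by the spatially resolving reference detector.
    pattern = rng.random(target.shape)
    bucket = np.sum(pattern * target)   # scalar bucket-detector signal
    corr_sum += bucket * pattern        # accumulate bucket x reference product
    bucket_sum += bucket
    pattern_sum += pattern

# Covariance image: <B * I(x,y)> - <B><I(x,y)>; neither arm alone resolves
# the target, but their cross-correlation does.
ghost = corr_sum / n_patterns - (bucket_sum / n_patterns) * (pattern_sum / n_patterns)

# On-target pixels show a clearly higher correlation than background pixels.
print(ghost[target > 0].mean() > ghost[target == 0].mean())  # True
```

For statistically independent uniform patterns, the covariance at each pixel is proportional to the target transmission there, so the "T" emerges from the correlation even though the bucket detector has no spatial resolution at all.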

Conclusion 3-12: Advanced quantum approaches, including nonclassical photon statistics, squeezed light, and entangled photons, while intriguing and potentially promising, are not currently of added utility for practical remote-sensing systems. Nevertheless, it is important to pursue and monitor this family of approaches, since new concepts with breakthrough capabilities may emerge.

GENERAL CONCLUSIONS—EMERGING SYSTEMS

The following general conclusions regarding emerging active EO systems are derived from the discussions in this chapter taken as a whole.

Conclusion 3-13: Emerging active EO systems show strong advantages (signal-to-noise ratio gain, phase compensation, and thinner, lighter apertures) at the cost of increased system complexity (computational processing costs, narrow-linewidth lasers, etc.).

Conclusion 3-14: Emerging active EO technologies can complement current conventional ladar systems.

Conclusion 3-15: High-level, active EO emerging technologies will most likely be pursued through funding at university, government, or industry laboratories, with indicators given by publications and presentations.

Conclusion 3-16: Coherent active EO systems will continue to develop for applications that require access to the optical field (not just intensity).

Conclusion 3-17: Large potential markets may propel progress in emerging active EO technologies for commercial applications.