IDR Team Summary 4
Develop a telescope or starshade that would allow planetary systems around neighboring stars to be imaged.
CHALLENGE SUMMARY
The world in which we live is the only planet we know that harbors life. Is our planet unique? We have not yet found life on Mars, despite ample evidence of the existence of water, nor have we found evidence of life anywhere else in our solar system. A tantalizing possibility is that life may yet exist under the ice of the moons of Jupiter—yet there is no proof. Is there life elsewhere in the universe? If we were able to image planetary systems around neighboring stars, and in addition, characterize the surfaces and atmospheres of constituent planets, we would be one step closer to answering this question.
To date, more than 400 planets have been detected around other stars through a combination of radial-velocity techniques, transit experiments, and microlensing. Low-resolution spectra of a number of planets have also been obtained using the Hubble Space Telescope, the Spitzer Space Telescope, and a few ground-based observatories; in these cases, the planets have been objects unlike anything in our solar system, mostly Jupiter-like planets in Mercury-like orbits. Images of several planetary systems have also been collected from the ground and space; these have shown planets of extreme size, 3–20 times Jupiter’s mass, in orbits wider than even the bounds of our solar system.
Planetary systems like our own around other stars are too small to be imaged by conventional telescopes. To search around the nearest 150 stars, we would need a telescope with an angular resolution better than ~20 mas; this would allow us to distinguish objects such as Earth and Venus in solar system analogues at a distance of 15 pc from Earth. Our turbulent atmosphere limits ground-based telescopes to resolutions no better than 50 mas, even with the best available adaptive optics. The Hubble Space Telescope, with its 2.4 m mirror, likewise has a resolving power no better than 50 mas. New advanced space telescopes are needed to image planetary systems similar to our own.
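The ~20 mas figure follows from the definition of the parsec: a separation of 1 AU viewed from 1 pc subtends 1 arcsecond. A quick sketch of that arithmetic (the 0.72 AU orbit of Venus and the 15 pc distance are taken from the text above):

```python
def separation_mas(a_au: float, d_pc: float) -> float:
    """Star-planet angular separation in milliarcseconds.

    Uses the small-angle relation theta[arcsec] = a[AU] / d[pc],
    which is the definition of the parsec.
    """
    return a_au / d_pc * 1000.0

# Solar system analogue at 15 pc:
earth = separation_mas(1.00, 15.0)   # ~66.7 mas from the star
venus = separation_mas(0.72, 15.0)   # ~48.0 mas from the star
print(f"Earth: {earth:.1f} mas, Venus: {venus:.1f} mas, "
      f"difference: {earth - venus:.1f} mas")
```

The Earth-Venus gap at 15 pc is just under 20 mas, which is why a resolution better than ~20 mas is needed to tell the two apart.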
Beyond angular resolution limitations, a more difficult challenge is that planets are extremely faint compared with the stars around which they orbit. An Earth-like planet is about 10 billion times fainter than a Sun-like star at optical wavelengths; the contrast is less severe at infrared wavelengths, where the planet is still a factor of 10 million fainter. Because of this, starlight scattered within a telescope by what would otherwise be negligible imperfections in mirror surfaces can completely overwhelm the light from a planet. Telescopes must be significantly oversized compared to the required diffraction-limited resolution so that planets can be seen beyond the glare of scattered starlight. Space telescopes with diameters of 8 m or more are needed to look for terrestrial planets around just the nearest dozen or so stars.
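The resolution numbers quoted above can be checked with the standard Rayleigh criterion, theta = 1.22 lambda / D. A minimal sketch, assuming a visible wavelength of 550 nm (a representative value, not stated in the text):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ~206,265 arcsec per radian

def rayleigh_mas(wavelength_m: float, diameter_m: float) -> float:
    """Rayleigh diffraction limit, theta = 1.22 * lambda / D, in mas."""
    return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RAD * 1e3

# Hubble's 2.4 m mirror at 550 nm -- consistent with the ~50 mas
# resolving power quoted in the text:
print(f"HST (2.4 m): {rayleigh_mas(550e-9, 2.4):.0f} mas")

# An 8 m aperture at the same wavelength resolves well below 20 mas,
# leaving margin to work beyond the glare of scattered starlight:
print(f"8 m:         {rayleigh_mas(550e-9, 8.0):.0f} mas")
```

An 8 m telescope is thus diffraction-limited at roughly 17 mas, comfortably oversized relative to the ~20 mas requirement, as the paragraph above argues it must be.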
Building an 8-m optical space telescope is a formidable technical and engineering challenge. The largest telescopes on Earth are only slightly larger, namely the twin 10-m telescopes of the W. M. Keck Observatory. The largest telescope that can fit easily inside a launch vehicle is much smaller: only about 3.5 m in diameter. Innovative approaches to telescope design and packaging are therefore needed. In addition, the telescope must have optics capable of suppressing starlight by a factor of 10 million to 10 billion, which is still beyond the state of the art. Although this approach is certainly feasible with sufficient investment, it would provide images of only a handful of nearby planetary systems. Other innovative approaches have also been under study.
A potentially simpler approach might be to use a starshade to block starlight before it enters the telescope, sized and positioned so that the planet’s light can still be seen. A starshade would need to be several tens of meters in diameter and stationed tens of thousands of kilometers from the telescope. This approach could greatly relax the engineering requirements on the telescope itself, but it introduces other logistical challenges. It also would not significantly increase the number of planetary systems that could be imaged.
The limitations in angular resolution of a single telescope can be overcome if multiple telescopes are used simultaneously as an interferometer in a synthesis array. This provides an increase in resolution proportional to the telescope-telescope separation, not simply the telescope diameter. Since the late 1950s, radio astronomers have used arrays of radio telescopes for synthesis imaging, realizing that it would never be possible to build steerable telescopes larger than about 100 m (such as the National Radio Astronomy Observatory’s Green Bank Telescope in West Virginia), nor fixed telescopes larger than ~300 m (the extreme example being Cornell’s Arecibo telescope in Puerto Rico). Combining signals from separated telescopes is relatively straightforward at radio and millimeter wavelengths, because radio receivers with adequate phase stability and phase references are readily available. At optical and infrared wavelengths the problem is significantly more difficult, because of the increased stability requirements at these shorter wavelengths. Nonetheless, this approach seems to be a promising long-term path to imaging other planetary systems and finding life on other worlds.
An optical or infrared telescope array in space is also a formidable technical and engineering problem. Nonetheless, the required starlight suppression of a factor of 10 million (in the infrared) has been demonstrated in the lab. Telescope separations of up to 400 m are needed to survey the nearest 150 or so stars. The largest ground-based arrays, such as Georgia State University’s Center for High Angular Resolution Astronomy (CHARA) Array on Mount Wilson, California, have telescope separations of up to 330 m. However, atmospheric turbulence limits their sensitivity to objects brighter than 10th–14th magnitude. A space telescope array, above the atmosphere, would have a sensitivity limited primarily by the collecting area of each telescope, but there would be no single platform large enough on which to mount it. The telescopes would need to be operated cooperatively as a formation-flying array; this was for many years the baseline design of NASA’s Terrestrial Planet Finder (TPF) mission. Although experiments in space have demonstrated rendezvous and docking of separate spacecraft, no synthesis array has yet been flown. There is no precedent for a mission like TPF.
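The payoff of an interferometer is that resolution scales with the baseline B between telescopes, roughly theta ~ lambda / B, rather than with a single mirror's diameter. A quick sketch, assuming a representative mid-infrared wavelength of 10 um (the text specifies the 400 m separation but not the wavelength):

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ~206,265 arcsec per radian

def fringe_resolution_mas(wavelength_m: float, baseline_m: float) -> float:
    """Approximate interferometer resolution, theta ~ lambda / B, in mas."""
    return wavelength_m / baseline_m * ARCSEC_PER_RAD * 1e3

# A 400 m baseline at 10 um (mid-infrared):
print(f"{fringe_resolution_mas(10e-6, 400.0):.1f} mas")
```

Even at the much longer infrared wavelengths, a 400 m baseline reaches roughly 5 mas, far finer than any single filled aperture that could plausibly be launched.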
Key Questions
- What innovative approaches from other disciplines could reduce the cost and increase the scientific return of a planet-imaging mission?
- How might NASA’s Human Spaceflight Program be used to build new observatories in space?
- How should NASA best invest in technology to enable future planet-imaging missions?
Reading
Fridlund M, et al. The astrobiology habitability primer. Astrobiology 2010;10:1-4. Abstract accessed online June 15, 2010.
Hand E. Telescope arrays give fine view of stars. Nature 2010;464:820-1. Accessed online June 15, 2010.
Oppenheimer BR and Hinkley S. High-contrast observations in optical and infrared astronomy. Ann Rev Astron Astrophys 2009;47:253-89. Abstract accessed online June 15, 2010.
Schneider J. Extrasolar Planets Encyclopaedia: Interactive Extra-solar Planets Catalog, with all planets detected by imaging, with references. Accessed online June 15, 2010.
IDR TEAM MEMBERS
- Supriya Chakrabarti, Boston University
- Richard A. Frazin, University of Michigan
- Jennifer D. T. Kruschwitz, JK Consulting
- Tod R. Lauer, National Optical Astronomy Observatory
- Peter R. Lawson, California Institute of Technology
- Timothy P. McClanahan, NASA Goddard Space Flight Center
- Richard G. Paxman, General Dynamics-Advanced Information Systems
- Lisa A. Poyneer, Lawrence Livermore National Laboratory
- George R. Hale, Texas A&M
IDR TEAM SUMMARY
George R. Hale, NAKFI Science Writing Scholar, Texas A&M
For years people have looked at the sky and wondered if there were other Earths out there. It wasn’t until about 15 years ago that we knew that other stars in the cosmos had planetary companions. Today we know about the presence of hundreds of extrasolar planets, but we’ve only actually seen a handful of them.
The vast majority of extrasolar planets have been discovered using indirect methods, such as the radial velocity method or seeing transits of the planet in front of its star. The reason that only a few planets have been directly imaged is that they are incredibly difficult to see. One well-worn analogy is that finding a planet orbiting a distant star is like looking for a firefly next to a searchlight in New York while you’re in San Francisco.
This analogy highlights two of the major difficulties faced when imaging such planets. First, they are very far away and therefore hard to resolve without a very powerful telescope, and second, a star’s glare “drowns out” the planet’s light. Blocking out that glare can be done with a coronagraph or with a more complicated device called a starshade.
Can We Get a Different Question?
Originally, IDR team 4’s challenge was to develop a telescope or starshade that would allow planetary systems around neighboring stars to be imaged. Significant work in this area is either already under way or planned for the near future, and a hardware-focused discussion would not fit the imaging science–related theme of the conference. As a result, team 4 decided to reframe the challenge of imaging extrasolar planets in a more topical way, in the hopes of making more progress instead of rehashing existing information. The new question is: “How do we apply imaging science to detect and characterize exoplanets?”
Why Are We Interested?
Over the next few years, several new instruments like Kepler, SPHERE, the Gemini Planet Imager (GPI), and PICTURE will come online and begin to collect vast amounts of data. This coming flood of information means we need better imaging methods now, not in 5 to 10 years.
Another motivation for improving imaging is that the hundreds of planets found thus far have been nothing like what we expected. Because of limitations of the radial velocity method, the planets we have found are huge (larger than Jupiter) and have close orbits (closer than Mercury is to the sun). The ones we really want to find—those that may be habitable—are going to be smaller and farther away.
Problems to Overcome
Direct imaging doesn’t have the same limitations that the radial velocity method faces, which means it should be possible to find the planets astronomers want to find. But it does have its own challenges, such as brightness differences, atmospheric turbulence, and image noise.
First and foremost, stars and the planets that orbit them have vastly different brightness. For example, the planets in HR 8799—one of the handful of directly imaged systems—are several thousand times dimmer than their parent star. Finding these planets was hard, but imaging a smaller and farther out planet would be even harder. Seeing a planet the size of Earth would require a reduction by a factor of 10⁹. In other words, it would be a billion times dimmer than the star it orbits.
Atmospheric turbulence has been and continues to be a headache for astronomers worldwide. A space telescope is one way to get around turbulence, but such instruments are very expensive and pose their own engineering challenges. Another method is through the use of adaptive optics (AO). AO systems use deformable mirrors and computer software to attempt to compensate for turbulence, but they aren’t perfect. Even the best AO systems leave some noise in the image, which takes the form of streaks or “sparkles” in the background. This noise makes finding something small and faint like a planet exceptionally difficult, so it is desirable to remove or cancel out the interference.
Detection and Categorization
What’s needed is a way to find out what’s limiting imaging performance. To begin understanding how to do this, we have to consider the data. Image data from telescopes come in the form of data cubes. These cubes contain spatial data (x and y coordinates) along with data about the light’s wavelength and polarization, and a series of cubes taken at different times gives researchers temporal data. Wavelength and polarization data are of particular interest: how light is polarized can reveal the presence and composition of dust around a star, and the light’s spectrum can give clues about the presence of substances like water, methane, and chlorophyll—possible signs of life.
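One way to picture such a data set is as a multidimensional array: spatial axes, a wavelength axis, a polarization axis, and a time axis across a series of exposures. A minimal sketch (the specific dimensions below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical observation: 10 exposures, 30 wavelength channels,
# 2 polarization states, 256 x 256 pixels.
n_time, n_wave, n_pol, ny, nx = 10, 30, 2, 256, 256
cube_series = np.zeros((n_time, n_wave, n_pol, ny, nx), dtype=np.float32)

# One "data cube" is a single exposure: spatial + wavelength + polarization.
one_cube = cube_series[0]
print(one_cube.shape)        # (30, 2, 256, 256)

# The spectrum at a single pixel (here, one polarization state):
spectrum = one_cube[:, 0, 128, 128]
print(spectrum.shape)        # (30,)
```

Slicing along different axes gives the different views the text describes: a spectrum at each pixel, a polarization map at each wavelength, and the evolution of all of these over time.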
But before we can look for signs of life, we have to find the planet. It is important to understand how the noise in the image behaves over time, how it behaves when the image moves, and how it behaves at different wavelengths. To picture the sparkly background, imagine an elongated, ice cream cone–shaped streak that smears from red to blue. A planet in this image would be a single point and would show only one spectrum. As for motion, when a telescope fixes its gaze on a star, the noise stays in place while planets move.
Inquiring Minds Want to Know
How much better can we do? That’s the kernel of IDR team 4’s question. What are the different modeling approaches, and what is the best statistical framework? How do we choose which is best? A method that takes two hours to process one data cube is of little benefit. Image processing needs to be done in real time, and improvements have to be weighed against their costs in time and money.
A related question is how much better scientists should try to do. Improvements in exoplanet images can be thought of as running on a continuum, with doing nothing to the image at all on one end and an ideal linear observer on the other. Currently, researchers use methods like angular differential imaging (ADI), spectral differential imaging (SDI), and the locally optimized combination of images (LOCI) to eliminate noise. These techniques are an improvement but are only ad hoc methods.
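The core idea behind ADI can be sketched in a few lines: with the field rotating relative to the detector, the quasi-static speckle noise stays fixed while any planet moves, so a time-median of the frames estimates the speckle pattern, and derotating the residuals makes the planet add up while the leftover noise averages down. This is a deliberately simplified illustration, not any team's production pipeline:

```python
import numpy as np
from scipy.ndimage import rotate

def adi_simple(frames: np.ndarray, parallactic_angles: np.ndarray) -> np.ndarray:
    """Minimal angular differential imaging sketch.

    frames: (n_frames, ny, nx) image stack taken while the field rotates.
    parallactic_angles: sky rotation of each frame, in degrees.
    """
    # Quasi-static speckles look the same in every frame, so the
    # pixel-wise median over time estimates the speckle pattern.
    speckle_model = np.median(frames, axis=0)
    residuals = frames - speckle_model

    # Derotate each residual so the planet lands on the same pixel,
    # then average: the planet signal stacks up coherently while the
    # residual speckle noise averages down.
    derotated = [rotate(r, -angle, reshape=False, order=1)
                 for r, angle in zip(residuals, parallactic_angles)]
    return np.mean(np.stack(derotated), axis=0)
```

Methods like LOCI improve on this by building a locally optimized speckle model for each image region instead of a single global median, which is part of why the text calls the simple versions ad hoc.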
What we must know is the source of fundamental error. Better understanding the source of error is the first step in developing an algorithm to correct for it. Such an algorithm would also need to be adaptive, changing when an AO system does something unexpected. One source of such information could be the AO systems themselves. AO systems produce corrected images, but they also produce a constant stream of data about the atmosphere and about the system itself, that is, what the AO system is doing. The question is: Can we exploit this? These auxiliary data could hold the key to developing a better image-processing algorithm.
Lastly, it’s worth considering how improved imaging could inform hardware decisions for future instruments. For instance, GPI will go online soon. Had better image correction algorithms been developed five years ago, would the instrument look any different? Will improved algorithms lead to better images and better hardware?