such as PET (positron emission tomography) is combined with an anatomical modality such as CT (computed tomography); normally the images are either superimposed or read together by the radiologist, but it is also possible to use the information from one of the modalities to control the data acquisition in the other modality (Clarkson et al., 2008; referenced under Reading).
The goal of either adaptive optics or the more general adaptive imaging is to improve the quality of the resulting images. Most often this quality has been assessed in terms of image sharpness or subjective visual impressions, but it is also possible to define image quality rigorously in terms of the scientific or medical information desired from the images, which is often referred to as the task of the imaging system. Typical tasks in medicine include detecting a tumor and estimating its change in size as a result of therapy. In astronomy, the task might be to distinguish a single star from a double star or to detect an exoplanet around a star. The quality of an imaging system, acquisition procedure, or image-processing method is then defined in terms of the performance of some observer on the chosen task, averaged over the images of many different subjects.
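To make the idea concrete, the following is a minimal numerical sketch of task-based assessment for a detection task, using a linear Hotelling-type model observer on simulated one-dimensional "images." All specifics here (image size, noise model, signal profile) are illustrative assumptions, not taken from the works cited; the figure of merit is the area under the ROC curve (AUC), estimated over many simulated subjects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 64-pixel 1-D "images" with Gaussian noise
# and a weak rectangular "tumor" signal (all values illustrative).
n_pix, n_train, n_test = 64, 2000, 1000
signal = np.zeros(n_pix)
signal[28:36] = 0.5

def make_images(n, present):
    """Simulate n noisy images, with or without the signal."""
    noise = rng.normal(size=(n, n_pix))
    return noise + (signal if present else 0.0)

# Train a Hotelling (prewhitening linear) observer:
# template w = K^{-1} (mean_present - mean_absent)
absent = make_images(n_train, False)
present = make_images(n_train, True)
K = np.cov(np.vstack([absent, present]).T)
w = np.linalg.solve(K, present.mean(0) - absent.mean(0))

# Apply the observer to independent test images
t0 = make_images(n_test, False) @ w   # signal-absent test statistics
t1 = make_images(n_test, True) @ w    # signal-present test statistics

# AUC via the Wilcoxon/Mann-Whitney estimator: P(t1 > t0),
# averaged over all pairs of test images ("subjects").
auc = (t1[:, None] > t0[None, :]).mean()
print(f"detection AUC: {auc:.3f}")
```

In this framework, an adaptive imaging system would be tuned to maximize a figure of merit like this AUC, rather than sharpness alone; the open question raised above is how to define such an average meaningfully when the system adapts to each individual subject.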
The methodology of task-based assessment of image quality is well established in conventional, non-adaptive imaging (although some computational and modeling aspects will be explored under IDR Team Challenge 2), but very little has been done to date on applying the methodology to adaptive systems. Barrett et al. (2006) discuss task-based assessment in adaptive optics, and Barrett et al. (2008) treat the difficult question of how one even defines image quality, normally a statistical average over many subjects, in such a way that it can be optimized for a single subject. Much more research is needed on image-quality assessment for all forms of adaptive imaging.
What imaging problems are most in need of autonomous adaptation? What information, either from the images or from auxiliary sensors, is most likely to be useful for guiding the adaptation in each problem?
For each of the problems considered under the first question, what are the possible modes of adaptation? That is, what system parameters can be altered in response to initial or ongoing image information?
Again for each of the problems, how much time is available to analyze the data and implement the adaptation? What new algorithms and computational hardware might be needed?