
2. Image Analysis and Computer Vision
Pages 9-36

The Chapter Skim interface presents what we've algorithmically identified as the most significant single chunk of text within every page in the chapter.


From page 9...
... rely heavily on stored knowledge and symbolic reasoning. More specifically, low-level vision includes such problems as coding and compressing data for storage and transmission; synthesizing natural and man-made patterns; restoring images degraded by blur, noise, digitization, and other sensor effects; reconstructing images from sparse data or indirect measurements (e.g., computed tomography)
From page 10...
... This may be due in part to the lack of raw processing power or suitably parallel computation, but also, and perhaps more important, to the inability of synthetic systems to integrate sources of information and place appropriate global constraints. At the moment, automated visual systems rarely make "interpretation-guided" or "knowledge-driven" decisions, due probably to a lack of sufficiently invariant representations
From page 11...
... In fact, some of the earliest and most publicized successes of computer vision occurred during the 1960s and 1970s when images received from orbiting satellites and space probes were substantially improved with linear signal processing techniques such as the Wiener filter. More recently, significant advances have been made in the classification of satellite data for weather and crop yield prediction, geologic mapping, and pollution assessment, to name but three other areas of remote-sensing.
From page 12...
... Section 2.2 contains a brief review of digital images, and §2.3 describes four specific image analysis tasks. 2.2 Digital Images The data available to an automated decision system are one or more images acquired by one or more sensors.
From page 13...
... More specifically, the true pattern f(u) corresponds to the distribution of energy flux (radiance) emitted by objects, either because they are "illuminated" by an energy source, or because they are a primary source of energy themselves; it is often referred to as "scene intensity" or "brightness." The measured values g correspond to the energy flux (or irradiance)
From page 14...
... three-dimensional shape reconstruction. These problems demonstrate the mathematical difficulties encountered in converting information which is implicit in the recorded digital image to explicit properties and descriptions of the physical world.
From page 15...
... is still imposed. To see this, consider the simple case of a linear model: g = Hf + η, with only blur and a single noise process.
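The linear blur-plus-noise degradation model discussed in this excerpt (a blur operator applied to the true pattern, followed by additive noise) can be sketched numerically. This is a minimal illustration, not the report's own implementation; the uniform 3×3 blur kernel, the noise level, and all names (`blur3x3`, `degrade`) are illustrative assumptions:

```python
import numpy as np

def blur3x3(f):
    # The blur operator H: a uniform 3x3 moving average,
    # with edge-replicated padding at the image border.
    p = np.pad(f, 1, mode="edge")
    out = np.zeros_like(f)
    for di in range(3):
        for dj in range(3):
            out += p[di:di + f.shape[0], dj:dj + f.shape[1]]
    return out / 9.0

def degrade(f, noise_sigma, rng):
    # Simulate g = Hf + eta: blur the true pattern f,
    # then add i.i.d. Gaussian noise eta.
    return blur3x3(f) + rng.normal(0.0, noise_sigma, size=f.shape)

rng = np.random.default_rng(0)
f = np.zeros((8, 8))
f[3:5, 3:5] = 1.0                       # a small bright patch as the "scene"
g = degrade(f, noise_sigma=0.05, rng=rng)
```

Restoration is then the inverse problem: recovering f from the observed g, which is ill-posed because the blur discards high-frequency detail and the noise corrupts what remains.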
From page 16...
... , object recognition, and full-scale scene interpretation. In the view of some researchers, including the authors, this modular approach to image analysis is highly suspect, and generic segmentation is overemphasized.
From page 17...
... , the isotope emits a positron, which, upon colliding with a nearby electron, produces two photons propagating in opposite directions. From here on the focus is on single photon emission computed tomography (SPECT)
From page 18...
... is that of maximum likelihood (ML), i.e., maximize P(y|x)
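For Poisson emission data, the ML criterion mentioned here is classically maximized with the multiplicative EM iteration of Shepp and Vardi (reference [67] territory in this chapter's list). The following is a toy sketch under stated assumptions: the 3×2 system matrix `A`, the noise-free counts, and the function name `mlem` are all hypothetical illustration, not the report's formulation:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    # EM iteration maximizing the Poisson likelihood P(y|x):
    #   x <- x * A^T(y / Ax) / A^T 1
    # The update is multiplicative, so x stays nonnegative.
    x = np.ones(A.shape[1])                # flat initial emission intensity
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# Toy system: 3 detector bins observing 2 source pixels (hypothetical A).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([2.0, 4.0])
y = A @ x_true                             # noise-free expected counts
x_hat = mlem(A, y)
```

With consistent, noise-free data and a full-rank system matrix, the iteration converges to the true intensities; with real noisy counts, unregularized ML reconstructions become noisy, which is one motivation for the Bayesian priors discussed later in the chapter.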
From page 19...
... Again, the paradigm consists of distinct steps, and the extraction of geometric features is itself preceded by "pre-processing," which encompasses noise removal and other aspects of restoration, edge and boundary detection, and perhaps segmentation. The main sources of information ("cues") for three-dimensional shape reconstruction are the intensity changes themselves ("shape-from-shading")
From page 20...
... , u = (u1, u2) ∈ R², from an observed image irradiance function g(u) on the image plane in R².
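The forward direction of this shape-from-shading problem, going from a surface to its image irradiance g(u), has a simple closed form for a Lambertian surface under a distant point source: g is the clamped dot product of the surface normal with the light direction. A minimal sketch, assuming a Lambertian reflectance model and a height-map parameterization (the function name and discretization are illustrative, not from the text):

```python
import numpy as np

def lambertian_irradiance(z, light):
    # Image irradiance g(u) = max(0, n(u) . s) for a Lambertian surface
    # given by a height map z over the image plane, lit from unit direction s.
    q, p = np.gradient(z)                       # dz/d(row), dz/d(col)
    n = np.stack([-p, -q, np.ones_like(z)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    s = np.asarray(light, dtype=float)
    s /= np.linalg.norm(s)
    return np.clip(n @ s, 0.0, None)

# Frontal lighting of a flat surface gives uniform irradiance 1.
g_flat = lambertian_irradiance(np.zeros((16, 16)), light=(0.0, 0.0, 1.0))
```

Shape-from-shading is the inverse of this map, recovering z (or the normals) from g, and is underdetermined at each pixel, which is why the global regularity assumptions discussed in the chapter are needed.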
From page 21...
... The statistical variability of such regularities suggests a Bayesian formulation in which a priori knowledge and expectations are represented by a prior distribution. Spatial processes in general, and Markov random fields (MRF)
From page 22...
... . In this way, the measure favors configurations in which nearby pixels have similar gray levels.
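The mechanism described here can be made concrete with the simplest pairwise Gibbs energy. This is a generic sketch of the idea, not the chapter's specific model; the quadratic potential, the 4-neighbor system, and the name `smoothness_energy` are illustrative choices:

```python
import numpy as np

def smoothness_energy(x, beta=1.0):
    # Pairwise Gibbs energy U(x) = beta * sum over 4-neighbor pixel
    # pairs of (x_i - x_j)^2. The unnormalized Gibbs measure exp(-U(x))
    # then assigns high probability to configurations in which
    # neighboring pixels have similar gray levels.
    dh = np.diff(x, axis=1)    # horizontal neighbor differences
    dv = np.diff(x, axis=0)    # vertical neighbor differences
    return beta * (np.sum(dh ** 2) + np.sum(dv ** 2))

flat = np.zeros((4, 4))
checker = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
# exp(-U) ranks the flat image as far more probable than the checkerboard,
# since the checkerboard pays for every neighboring pair.
```

By the Hammersley-Clifford correspondence, specifying such local clique potentials is equivalent to specifying a Markov random field, which is what makes this a convenient language for prior expectations about images.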
From page 23...
... In the restoration problem, the degradation model is induced by (2.2); in boundary detection and segmentation, it is a projection; in tomography, it is given by (2.4); in shape-from-shading, it is induced by (2.5)
From page 24...
... The problem of parameter estimation has given rise to interesting mathematical questions, and to an interplay between statistical inference and the phenomena of phase transitions [7]. Attribute Estimation The ultimate goal, of course, is to choose a particular estimate, x̂ = x̂(y)
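One classical way to compute such an estimate x̂(y) in this Bayesian setting is iterated conditional modes (ICM), due to Besag (reference [5] in this chapter's list): sweep the pixels, replacing each one by the mode of its conditional posterior given the rest. The sketch below assumes a Gaussian observation model with a quadratic smoothness prior, for which each pixel update has a closed form; the model choices and the name `icm` are illustrative, not the report's:

```python
import numpy as np

def icm(y, beta=1.0, sigma=1.0, n_sweeps=10):
    # Greedy coordinate-wise maximization of the posterior for the energy
    #   sum_i (x_i - y_i)^2 / (2 sigma^2) + beta * sum_<i,j> (x_i - x_j)^2.
    # For this quadratic energy the per-pixel update is exact:
    #   x_i <- (y_i/sigma^2 + 2 beta * sum of neighbors)
    #          / (1/sigma^2 + 2 beta * #neighbors)
    x = y.astype(float).copy()
    H, W = x.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [x[a, b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W]
                x[i, j] = (y[i, j] / sigma ** 2 + 2 * beta * sum(nbrs)) / \
                          (1 / sigma ** 2 + 2 * beta * len(nbrs))
    return x
```

ICM converges quickly to a local maximum of the posterior; finding the global MAP estimate for non-convex energies (e.g., with coupled edge processes) requires stochastic methods such as simulated annealing.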
From page 25...
... . 2.4.2 Examples Image Restoration The basic degradation model is given by (2.2).
From page 26...
... The process XE is neither part of the data nor the target of estimation; rather, it is an auxiliary process designed to bring exogenous information into the model, and it is coupled to XP in such a manner that in the likely states of the joint probability distribution of X = (XP, XE), the intensity function is locally smooth with possibly sharp transitions, and the locations of the edges satisfy our a priori expectations about the behavior of boundaries.
From page 27...
... The energy U2(xE) reflects our prior expectations about boundaries: most pixels are not at boundary regions; boundaries are usually persistent (no isolated or abandoned segments)
From page 28...
... for detecting textural boundaries. The method also applies to the problem of locating boundaries representing sudden changes in depth and surface orientation.
From page 29...
... Single Photon Emission Tomography The digitized isotope intensity (see §2.3.3) is thought to be a realization of a spatial process X = {Xi : i ∈ S}
From page 30...
... intensity process XP = {XiP : i ∈ S}, the shape process N = {Ni : i ∈ S}
From page 31...
... The basic idea is to use Gibbs distributions to articulate general properties of shapes: surfaces are locally smooth, and orientation may exhibit jumps because of changes in depth or the presence of surface discontinuities. As in restoration, the process N is coupled to an edge process XE = {XtE : t ∈ SE}
From page 32...
... reconstructed egg illuminated from y-direction. Deformable Templates In this subsection we briefly describe a powerful and elegant methodology introduced by Ulf Grenander for pattern synthesis and analysis of biological shapes.
From page 33...
... . For special cases, this conditioning is straightforward; in general, it involves subtle limit arguments.
From page 34...
... [5] Besag, J., Towards Bayesian image analysis, J
From page 35...
... Mertus, Comprehensive statistical model for single photon emission computed tomography, preprint, Brown University, 1990.
From page 36...
... A., and Y. Vardi, Maximum likelihood reconstruction in positron emission tomography, IEEE Trans. Medical Imaging 1 (1982)

