Page 105

Chapter 6
Positron Emission Tomography

6.1   Introduction

6.1.1 History

The history of positron emission tomography (PET) can be traced to the early 1950s, when the medical imaging possibilities of a particular class of radioactive substances were first realized. It was recognized then that the high-energy photons produced by the annihilation of the positron-emitting isotopes could be used to describe, in three dimensions, the physiological distribution of "tagged" chemical compounds. After 2 decades of moderate technological developments by a few research centers, widespread interest and broadly based clinical research activity began in earnest following the development of sophisticated reconstruction algorithms and improvements in detector technology. By the mid-1980s, PET had become a tool for medical diagnosis and for dynamic studies of human metabolism. Table 6.1 indicates the significant advances in PET technology over the last 20 years.

Table 6.1 Improvements in PET Technology Since 1975

Parameter              Typical Values, 1975    Typical Values, 1995
Spatial resolution     14 mm                   4 mm
Number of detectors    64                      19,000
Data per study         4 kilobytes             4 megabytes



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Terms of Use and Privacy Statement





6.1.2 Applications

Research with PET has added immeasurably to our current understanding of flow, oxygen utilization, and the metabolic changes that accompany disease and that change during brain stimulation and cognitive activation. Clinical uses include studies of Alzheimer's disease, Parkinson's disease, epilepsy, and coronary artery disease affecting heart muscle metabolism and flow. Even more promising with regard to widespread clinical utilization of PET are recent developments showing that PET can be used effectively to locate tumors and metastatic disease in the brain, breast, lung, lower gastrointestinal tract, and other sites. Early evidence also indicates that quantitative studies of tumor metabolism with PET can be used for noninvasive staging of the disease. Compared with other cross-sectional imaging techniques such as MRI and CT, PET is distinguished by its immense sensitivity: its ability to quantitatively determine and display tracer concentrations in the nanomolar range.

6.1.3 Principle of Operation

PET imaging begins with the injection of a metabolically active tracer: a biological molecule that carries with it a positron-emitting isotope (for example, ¹¹C, ¹³N, ¹⁵O, or ¹⁸F). Over a few minutes, the isotope accumulates in an area of the body for which the molecule has an affinity. For example, glucose labeled with ¹¹C, or a glucose analog labeled with ¹⁸F, accumulates in the brain or in tumors, where glucose is used as the primary source of energy. The radioactive nuclei then decay by positron emission. The ejected positron combines with an electron almost instantaneously, and the two particles undergo annihilation. The energy associated with the masses of the positron and electron is divided equally between two photons that fly away from one another at a 180° angle. Each photon has an energy of 511 keV.
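The 511-keV figure follows directly from the electron rest-mass energy; a quick numerical check (constants are standard CODATA values, not taken from this report):

```python
# Each annihilation photon carries the rest-mass energy of the electron
# (equal to that of the positron), E = m_e * c^2.
m_e = 9.1093837015e-31   # electron mass, kg
c = 299792458.0          # speed of light, m/s
J_per_keV = 1.602176634e-16

E_keV = m_e * c**2 / J_per_keV
print(round(E_keV))  # -> 511
```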
These high-energy γ-rays emerge from the body in opposite directions, to be detected by an array of detectors that surrounds the patient (Fig. 6.1). When two photons are recorded simultaneously by a pair of detectors, the annihilation event that gave rise to them must have occurred somewhere along the line connecting the detectors. Of course, if one of the photons is scattered, then the line of coincidence will be incorrect. After 100,000 or more annihilation events have been detected, the distribution of the positron-emitting tracer is calculated by tomographic reconstruction procedures from

the recorded projection data:

    P = exp( -∫ a(s) ds ) ∫ f(s) ds,                              (6.1)

where both integrals are taken along the line of coincidence, P is the projection data, a(s) is the linear attenuation coefficient for 511-keV γ-rays, and f(s) is the isotope distribution. Section 14.1.2 provides another look at the origin of this equation. The exponential factor in equation 6.1 takes into account the attenuation of the two γ-rays inside the patient's body. The effect of scattered radiation, although significant in certain imaging situations, is ignored in equation 6.1.

Figure 6.1. Arrangement of PET detectors in a ring camera surrounding the patient port.

PET reconstructs a two-dimensional image from the one-dimensional projections seen at different angles. Three-dimensional reconstructions can also be done using two-dimensional projections from multiple angles. After appropriate corrections for both attenuation and scatter of the γ-rays within the body, the resulting images give a quantitative measure of the isotope distribution. PET images, therefore, can be used both to qualitatively assess the site of unusual tracer accumulations (e.g., in tumors) and to quantitatively measure the tracer uptake for a more in-depth diagnosis or staging of the disease.
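A feature worth noting about equation 6.1 is that the attenuation factor for a coincident pair depends only on the total chord through the body, not on where along the line the annihilation occurred. This can be checked numerically; the attenuation coefficient and chord length below are illustrative values, not taken from this report:

```python
import math

mu = 0.0096       # assumed linear attenuation coefficient at 511 keV, 1/mm
chord = 200.0     # assumed total path length through the patient, mm

def pair_survival(event_pos_mm):
    # The two photons travel d1 and (chord - d1); their survival
    # probabilities multiply, so the product depends only on the
    # full chord length, never on the event position.
    d1 = event_pos_mm
    d2 = chord - event_pos_mm
    return math.exp(-mu * d1) * math.exp(-mu * d2)

for d in (10.0, 100.0, 190.0):
    assert abs(pair_survival(d) - math.exp(-mu * chord)) < 1e-12
```

This is why the exponential can be pulled outside the emission integral in equation 6.1, and why the attenuation correction can be measured with an external transmission source, as described in section 6.2.4.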

6.2 Current Status of PET Technology

6.2.1 γ-Ray Detectors

Efficient detection of the annihilation photons from positron emitters is usually provided by the combination of a crystal, which converts the high-energy photons to visible-light photons, and a photomultiplier tube that produces an amplified electrical current pulse proportional to the number of light photons interacting with the photocathode. Because the imaging system's sensitivity in measuring coincident events is proportional to the square of the detector efficiency, the detector must be nearly 100% efficient: detector systems such as plastic scintillators or gas-filled wire chambers, with typical individual efficiencies of 20% or less, would yield a coincident efficiency of only 4% or less.

Most modern PET scanners are multilayered, with up to 63 levels or transaxial layers to be reconstructed. In the most common current configuration, tungsten or lead septa between the layers of detector elements reject events between different rings of detectors. The septa decrease the sensitivity of the instrument, but they also help to reject events in which one or both of the 511-keV photons suffer a Compton scatter within the patient.

The "individually coupled" detector design, with one phototube per crystal, is capable of very high data throughput because the design is parallel (all photomultiplier tubes and scintillator crystals operate independently). The disadvantage of this design is that it requires many expensive photomultiplier tubes. In addition, connecting round photomultiplier tubes of sufficiently small diameter to rectangular scintillation crystals to form a solid ring leads to packaging problems.

The contemporary method of packaging many scintillators for 511 keV around the patient is the block detector design; the arrangement of scintillators and phototubes is shown in Figure 6.2. A block detector couples several photomultiplier tubes to a bank of scintillator crystals and uses a coding scheme to determine the crystal of interaction. Most block detector coding schemes use the ratio of light output between a number of photomultipliers to determine the crystal of interaction. In the example shown in Figure 6.2, four photomultiplier tubes are coupled to a block of bismuth germanate (BGO) that has been partially sawed through to form 64 "individual" crystals. The depth of the cuts is critical: deep cuts tend to focus the scintillation light onto the face of a single photomultiplier tube,

whereas shallower cuts tend to spread the light over all four photomultiplier tubes.

Figure 6.2. Block detector principle of operation. The position histogram of the ratios of light intensities x and y allows the identification of the crystal of interaction.

BGO, the most commonly employed scintillator in modern PET systems, has a light output of approximately 3,000 photons per 511 keV. For statistical reasons this limits the number of individual crystals that can be decoded without ambiguity within a detector block that spreads the light over the area of a number of crystals. Improving the spatial resolution of block detectors without increasing the number of expensive photomultiplier tubes thus requires scintillators with higher light output. A candidate currently under investigation is lutetium oxyorthosilicate (LSO), with a light output approximately 5 times higher than that of BGO. Another possible approach is to use higher-energy photons (>1 MeV) for detector calibration while accepting a certain statistical uncertainty when actually detecting 511-keV γ-rays. The resulting spatial detector response function will be wider than the width of the individual crystals, but because of better sampling the resulting image spatial resolution should still be improved.

Another new and promising development in PET detector technology is the replacement of photomultipliers by less expensive avalanche photodiodes (APDs). Although it has been shown that APDs can achieve the necessary time and pulse-height resolution, the development of large and tightly packaged avalanche photodiode arrays still represents a technological challenge.
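The ratio arithmetic behind the block decoding of Figure 6.2 can be sketched as follows. The 8 x 8 crystal layout, the tube geometry, and the signal values are illustrative assumptions, not specifications of any real scanner:

```python
def decode_crystal(a, b, c, d, n=8):
    """Estimate the (row, col) of the crystal of interaction in an n x n
    block from the light collected by four photomultiplier tubes.
    Assumed geometry: tubes a and b sit along the top edge of the block,
    tubes a and c along the left edge."""
    total = a + b + c + d
    x = (a + c) / total   # fraction of light reaching the left pair
    y = (a + b) / total   # fraction of light reaching the top pair
    col = min(int((1.0 - x) * n), n - 1)
    row = min(int((1.0 - y) * n), n - 1)
    return row, col

# Light shared equally by all four tubes points to the centre of the block.
print(decode_crystal(25.0, 25.0, 25.0, 25.0))  # -> (4, 4)
# Light seen almost entirely by tube a points to its corner.
print(decode_crystal(90.0, 4.0, 4.0, 2.0))     # -> (0, 0)
```

In a real block detector the mapping from (x, y) ratios to crystals is not this uniform; it is calibrated from the measured position histogram mentioned in the Figure 6.2 caption.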

6.2.2 Limitations of the Spatial Resolution

Aside from the obvious effect of the size of the detector elements on the spatial resolution of PET systems, there are two fundamental effects limiting the resolution of the reconstructed images:

1. The angle between the paths of the annihilation photons can deviate from 180° as a result of some residual kinetic motion (Fermi motion) at the time of annihilation. The effect of this deviation on resolution increases with the detector ring diameter; for a diameter of 80 cm, the effect on resolution is 1.7 mm.

2. The distance the positron travels after being emitted from the nucleus and before annihilation occurs degrades the spatial resolution. This distance depends on the particular nuclide. For example, the range blurring for ¹⁸F, the isotope used for many current PET studies, is quite small (about 0.5 mm) compared with that of other isotopes.

When both effects are combined and infinite resolution of the detectors is assumed, the fundamental limit for the spatial resolution in systems whose apertures allow investigation of the whole human body is about 2 mm for ¹⁸F and 5 mm for ⁸²Rb. Commercial systems with high-resolution block detectors today achieve a spatial resolution of approximately 4 mm when using ¹⁸F.

For events away from the center, or axis, of the tomograph, the spatial resolution degrades significantly. The path of the photons from an "off-center" annihilation typically traverses more than one detector crystal, as shown in Figure 6.3. This results in an elongation of the resolution spread function along the radius of the transaxial plane. The loss of resolution depends on the crystal density and the diameter of the tomograph detector ring. For an 80-cm-diameter system, the resolution can deteriorate by about 20% from the axis to 10 cm off axis. Any improvement in the resolution of the detectors beyond the currently achieved 4 mm, therefore, is meaningful only if it is combined with a method to determine the depth of interaction for each event within the individual detector crystals.
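A common rule of thumb, assumed here rather than stated in this report, adds the independent blurring contributions in quadrature; the 0.0022·D noncollinearity term reproduces the 1.7-mm figure quoted above for an 80-cm ring:

```python
import math

def reconstructed_fwhm_mm(crystal_mm, ring_diameter_mm, positron_range_mm):
    """Quadrature estimate of reconstructed resolution (FWHM).
    All three contributions are treated as independent blurs."""
    detector_term = crystal_mm / 2.0              # detector-size contribution
    noncollinearity = 0.0022 * ring_diameter_mm   # ~0.25 deg angular spread
    return math.sqrt(detector_term**2
                     + noncollinearity**2
                     + positron_range_mm**2)

print(round(0.0022 * 800, 1))                            # -> 1.8 (cf. 1.7 mm above)
print(round(reconstructed_fwhm_mm(6.0, 800.0, 0.5), 1))  # -> 3.5 (18F, 80-cm ring)
```

With an assumed 6-mm crystal, the estimate lands near the approximately 4-mm resolution quoted for commercial systems; shrinking the crystals eventually runs into the noncollinearity and positron-range floor described in the text.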

Figure 6.3. Loss in radial spatial resolution for off-axis events due to the oblique incidence angle of the γ-rays on the detector elements. (Courtesy of Thomas F. Budinger, Lawrence Berkeley National Laboratory.)

6.2.3 System Electronics

The electronics must be able to determine coincident events with about 10-ns resolution for each crystal-crystal combination (i.e., line of response). The timing requirement is set jointly by the time of flight across the detector ring (3 ns) and the crystal-to-crystal time resolution (typically 4 ns). The most stringent requirement, however, is the vast number of lines of response in which coincidences must be determined (over 1.5 million in a 24-layer camera with septa in place, and 18 million with the septa removed in order to also evaluate events between different detector rings). It is obviously impractical to have individual coincidence circuitry for each line of response, so tomograph builders use a parallel organization to solve this problem. A typical method is to use a high-speed clock to mark the arrival time of each 511-keV photon; coincident events are then found by searching for pairs of time markers that agree to within the coincidence window. This search can be done extremely quickly by having multiple dedicated processors working in parallel.

A modern commercial PET system employs a total of approximately 18,000 detector crystals grouped into about 300 block detectors. Three or four rings of block detectors allow an axial field of view of 15 cm to be measured without repositioning the patient. In a typical high-resolution scanner the individual detector crystals form 32 rings of, for example, 576 crystals each. To improve the reliability and reduce the cost of the system electronics, CMOS application-specific integrated circuits (ASICs)

have been developed for the front-end analog electronics, the position/energy logic of each detector block, and the parallel coincidence detection processors. Only by using such highly integrated electronics has it become possible to build clinical PET scanners that have all the electronics circuitry inside the scanner gantry and a total power dissipation of only about 1 kW.

If two annihilation events occur within the time resolution of the tomograph (e.g., 12 ns), then "random" coincidence events add erroneous background activity to the data; this can be significant at high count rates. These events can be corrected for on a line-of-response by line-of-response basis. The random coincidence event rate of each crystal pair is measured by observing the rate of events in a delayed coincidence timing window. The random event rate for a particular line of response corresponding to a crystal pair is proportional to the event rates of each crystal and to the coincidence window width. As the activity in the subject increases, the event rate in each detector increases; the random event rate therefore increases as the square of the activity.

The detected coincidence events need to be stored during acquisition at rates of up to 3 million per second. Rather than storing the events in chronological sequence (list mode), most modern systems provide real-time histogramming circuitry that allows data accumulation in projective views of the subject as required for the subsequent reconstruction. For each incoming coincidence pair, the correct address in projection space is determined in real time and one event is added to the corresponding memory location. This circuitry allows correction of random events in real time by subtracting the delayed coincidence pairs, or "randoms", from the appropriate projection space data address.
With real-time histogramming, the data are available for immediate reconstruction after the end of the data acquisition, without a time-consuming off-line sorting procedure.

6.2.4 Data Correction and Reconstruction Algorithms

Before reconstruction, each projection address or line of response receives three corrections: for crystal efficiency, attenuation, and dead time. The efficiency for each line of response is computed by dividing the observed count rate for that line of response by the average count rate for lines of response with similar geometry (i.e., length). The efficiency typically is measured using a radioactive ring or rotating rod source without the patient in place. Once the patient is in position, a transmission scan is taken with the same source and the attenuation factor for each line of response is computed

by dividing its transmission count rate by its efficiency or "blank" count rate. If rotating rod sources are used instead of a ring source to acquire the blank and transmission scans, lines of response that pass through the rod sources can be distinguished geometrically from other coincidence events that are due to scatter or background activity in the patient. In clinical practice this capability allows the transmission scan to be taken after the isotope injection and after the emission scan, which represents a significant time savings, since the uptake time for the radiopharmaceutical may be as long as 1 hour.

After transmission and emission data have been collected, the random-corrected emission rate is divided by the attenuation factor and efficiency for each line of response. The resulting corrected projection data are reconstructed, usually with the filtered backprojection algorithm, the same algorithm used in x-ray computed tomography. The projection data are formatted into parallel or fan-beam data sets for each angle. These are modified by a high-pass filter and backprojected.

Most of the conventional reconstruction techniques for implementing the inverse Radon transform are derived from the Fourier projection theorem. These methods apply a ramp-like frequency filter to the Fourier coefficients of each projection. After inverse transformation, the projections are backprojected and superposed to form the reconstructed image. The process of PET reconstruction is linear and can be written as operators acting successively on the projections P:

    f = BP{ F⁻¹[ R F(P) ] },

where f is the image, F is the Fourier transform, R is the ramp-shaped high-pass filter, F⁻¹ is the inverse Fourier transform, and BP is the backprojection operator.
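A minimal numpy sketch of this ramp-filter-and-backproject recipe follows. It uses nearest-neighbour backprojection on an illustrative 64 x 64 grid; this is a toy demonstration of the operator chain, not a clinical implementation:

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp high-pass filter R to each projection (last axis)
    in the Fourier domain, as in the operator equation above."""
    n = sinogram.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=-1) * ramp, axis=-1))

def fbp(sinogram, angles_deg):
    """Filter each view, then smear (backproject) it across the image."""
    n = sinogram.shape[1]
    filtered = ramp_filter(sinogram)
    xs = np.arange(n) - n / 2.0
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((n, n))
    for view, theta in zip(filtered, np.deg2rad(angles_deg)):
        # radial bin sampled by each pixel at this view angle
        s = X * np.cos(theta) + Y * np.sin(theta) + n / 2.0
        idx = np.clip(np.round(s).astype(int), 0, n - 1)
        image += view[idx]            # nearest-neighbour backprojection
    return image * np.pi / len(angles_deg)

# A central point source projects to the same central bin at every angle;
# FBP should concentrate the reconstructed activity at the image centre.
n = 64
angles = np.arange(0, 180, 2)
sino = np.zeros((len(angles), n))
sino[:, n // 2] = 1.0
img = fbp(sino, angles)
assert np.unravel_index(img.argmax(), img.shape) == (n // 2, n // 2)
```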
An alternative class of reconstruction algorithms involves iterative solution of the classic inverse problem

    P = A f,

where P is the projection matrix, f is the matrix of the true image data being sought, and A is the projection operation. The inverse, f = A⁻¹P, is computed by iteratively estimating the data f′ and modifying the estimate by comparison of the calculated projection set P′ with the true observed projections P. The expectation-maximization algorithm solves the inverse

problem by updating each pixel value f_i according to

    f_i^(k+1) = ( f_i^(k) / Σ_j a_ij ) Σ_j [ a_ij p_j / Σ_m a_mj f_m^(k) ],

where p_j is the measured projection value in detector j, a_ij is the probability that a source at pixel i will be detected in projection detector j, and k is the iteration number (cf. section 14.1.2).

6.3 Three-Dimensional Acquisition and Reconstruction

6.3.1 Principle of Three-Dimensional Acquisition

The design of a multislice tomograph with interplane septa restricts the percentage of annihilation photon pairs detected from a central line source to less than 0.5% in the direct planes (i.e., acquiring coincidence data only within the same crystal ring) and an additional 0.75% in the cross planes (i.e., using the detector pairs of adjacent rings to form interplane slices). The most efficient use of the available photon flux can be realized by removing the interplane septa, which usually shield the detectors from the more oblique lines of response, and reprogramming the data acquisition hardware to accept all possible coincidence events (Fig. 6.4). This three-dimensional data acquisition technique requires the implementation of a three-dimensional algorithm for image reconstruction and the solution of problems associated with increased scatter and increased random coincidences. The increase in sensitivity, however, is very large, up to a factor of four to five, owing primarily to the increased number of lines of response. The removal of the shadowing effect of the septa on the detectors further improves sensitivity. As a result, the dose of the radioisotope injected into the patient can be reduced, or the data acquisition time can be shortened to improve the patient's comfort, without loss of image quality.

6.3.2 Three-Dimensional Reconstruction

PET systems allowing fully three-dimensional acquisition have a geometry equivalent to that of a truncated cylindrical detector.
In a continuous model of such scanners, the integral of the unknown tracer distribution is measured along all straight lines that intersect the truncated cylinder in two points.

Figure 6.4. If the septa are removed to allow the acquisition of a larger number of lines of response in three-dimensional mode, the sensitivity increases by up to a factor of five. (Reprinted with permission from Cherry, S.R., Dahlbom, M., and Hoffman, E.J., 3D PET using a conventional multislice tomograph without septa, J. Comput. Assisted Tomogr. 15 (1991), 655-668. Copyright 1991 by Lippincott-Raven Publishers.)

These line integrals can be arranged in groups of parallel lines, to form a set of two-dimensional parallel projections of the object. This three-dimensional data set has three key properties:

1. Two-dimensional projections of the object are not measured for all orientations in space. The reconstruction, therefore, is a limited-solid-angle problem. This problem, however, has a unique and stable solution and therefore differs in nature from the two-dimensional limited-angle problem, which is known to be severely ill-posed (see section 14.4).

2. Mathematically, the data are highly redundant, because a subset of the data, corresponding to the straight projections equivalent to the data measured in two-dimensional mode, is in principle sufficient to reconstruct the image. However, the improvement in sensitivity allowed by three-dimensional acquisition would be useless if only a subset of the data were incorporated into the reconstruction. Three-dimensional algorithms, therefore, all attempt to use the whole set of redundant data.

3. Most two-dimensional parallel projections are truncated: for all orientations where the lines of response are not perpendicular to the scanner axis, the integral of the tracer distribution is not measured for all lines crossing the field of view.

All algorithms available today solve the truncation problem by exploiting, implicitly or explicitly, the possibility (property 2) of first reconstructing an estimate of the image based on a subset of the data. The reprojection algorithm of Kinahan and Rogers, a three-dimensional filtered-backprojection algorithm that has been implemented by several groups, has been tested intensively since 1989. This algorithm uses the lower-statistics image obtained by reconstructing only the straight projection data to obtain, through forward projection, the missing line integrals for the oblique data. Tests have demonstrated an image quality equal to that obtained from a two-dimensional reconstruction; they have also demonstrated the ability of the reprojection algorithm to take full advantage of the increased sensitivity allowed by the three-dimensional acquisition, i.e., to translate that increased sensitivity into an improved image signal-to-noise ratio. The reprojection algorithm is an operational, fully tested solution to the three-dimensional image reconstruction problem.

The reconstruction time and the data storage requirements of the reprojection algorithm are obstacles to a more widespread application of three-dimensional acquisitions, in particular for dynamic multiframe studies or whole-body acquisitions. Some relief is possible by exploiting the fact that the projection data from ring tomographs are naturally oversampled in the angular direction. This allows a reduction in data volume without loss of spatial resolution, by adding up projection data for adjacent angles, both axial and transaxial. Using this method of data reduction, an implementation of the reprojection algorithm on eight Intel i860 processors working in parallel on different projections of the data set requires about 12 minutes to reconstruct a three-dimensional image set composed of 63 axial planes on a scanner with 32 rings of detector elements.
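The angular data reduction described above (often called "mashing") is simple to sketch; the sinogram sizes are illustrative:

```python
import numpy as np

def mash(sinogram, factor=2):
    """Sum each group of `factor` adjacent angular views of a sinogram
    (shape: n_angles x n_bins), trading angular oversampling for a
    smaller data volume."""
    n_angles, n_bins = sinogram.shape
    assert n_angles % factor == 0
    return sinogram.reshape(n_angles // factor, factor, n_bins).sum(axis=1)

sino = np.arange(12.0).reshape(6, 2)   # 6 views, 2 radial bins
mashed = mash(sino)
print(mashed.shape)                    # -> (3, 2)
assert np.array_equal(mashed[0], sino[0] + sino[1])
```

Because the summed views differ only slightly in angle, the loss of angular information is negligible for an oversampled ring tomograph, while the reconstruction workload and storage are cut by the mashing factor.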

Other algorithms have been proposed to reduce the computational burden of three-dimensional reconstruction. The direct Fourier reconstruction method can be generalized to three dimensions using the three-dimensional central section theorem: the two-dimensional Fourier transform of a two-dimensional parallel projection of an image is equal to a central planar section through the three-dimensional Fourier transform of that image. An interesting property of the data sampling provided by cylindrical scanners is that the interpolation from the polar grid to the three-dimensional Cartesian grid is two-dimensional (as in the two-dimensional case) rather than three-dimensional, as might be expected a priori. Implementations of direct Fourier reconstruction have been investigated. Good-quality reconstructions have been achieved, but with a gain in speed not exceeding a factor of two; the algorithm is rather complex, and further work is needed to determine whether sufficiently accurate interpolation schemes can be used without losing the potential speed advantage of Fourier methods. Interpolation based on linograms has also been proposed.

The FAVOR algorithm is a three-dimensional filtered-backprojection algorithm that is based, like the reprojection algorithm, on an exact analytical inversion. Although it has not been tested as intensively as the reprojection method, this algorithm appears to be as accurate and to yield almost the same image signal-to-noise ratio for scanners with moderate axial apertures (smaller than 30°). The algorithm does not require forward projection and is about 30% faster than the reprojection algorithm.

6.3.3 Scatter Correction in Three Dimensions

Because of the removal of the interplane septa, the fraction of scattered events in three-dimensional mode is as high as 40% of all detected events, compared with only about 12-14% in two-dimensional mode with the septa in place. This necessitates improvements to the scatter correction routines, which in two dimensions usually rely on simple deconvolution methods. One approach exploits the admittedly limited energy resolution of PET detectors by simultaneously acquiring data in a lower energy window, representing mostly scattered events, and a higher energy window for the unscattered events. Various algorithms have been developed to estimate the contamination of the data in the lower window by unscattered events and vice versa, as well as to overcome the statistical noise problems associated with the reduced number of events in each window. The dual-energy-window scatter correction techniques have been demonstrated to allow a reduction of

the scatter-induced error in three dimensions to less than 10%. An inherent advantage of the dual-energy methods is their ability to account for scatter originating from "invisible" sources outside the scanner's field of view. It is questionable, however, whether this quantitatively small effect warrants the intrinsic increase in noise and the added computational burden.

Another class of scatter correction techniques employs model-based estimates of the scatter. The uncorrected images serve as an estimate of the source distribution, while the transmission images represent the scattering medium. An estimate of the scatter contamination in the projection data can then be computed using the Klein-Nishina formula (an equation that describes Compton scatter) in a calculation of the scatter distribution. Tests of these methods have been shown to correct the scatter-related error in three-dimensional images to within about 5%. Such forward-calculation-based scatter corrections had not been attempted earlier because of their computational complexity. With the computing power now available for three-dimensional reconstruction, however, these calculations can be performed in acceptable times if appropriate sampling techniques are used.

6.3.4 Attenuation Correction in Three Dimensions

Transmission scans in three dimensions suffer from the fact that, because of the increase in count rate associated with the removal of the septa, significantly weaker transmission sources must be employed in order to keep the detector dead time (the sensitivity loss at high count rates due to the event processing time in the γ-ray detector and the associated electronics) within acceptable limits. The gain in sensitivity and scan time of a factor of four to five in three dimensions is thus given up almost completely when transmission sources with only a fraction of the activity are used.

One way to overcome this problem is to avoid using the detectors close to the transmission sources, which would be most affected by dead-time loss, and to measure, without coincidence detection, only the γ-rays that have passed through the patient and reached the opposite side of the gantry. Since this approach eliminates the effect of "electronic collimation," other means have to be used to determine the second coordinate of the line of response. This can be accomplished by using a radioactive point source rotating around the gantry aperture. In order to measure all lines of response necessary for three-dimensional attenuation correction, the point source also has to be stepped through all axial planes prescribed by

the detector rings of the system. It has been shown that the transmission count rates achievable with such a point source arrangement can be more than a factor of 10 higher than those of conventional coincidence-based transmission scans in three dimensions.

6.4 Research Opportunities

To summarize, the following major areas of scientific or technological development appear to have the greatest potential for enhancing the applications and clinical usefulness of PET:

· Cost-effective γ-ray detectors with high spatial, time, and energy resolution and the capability of measuring the angle and depth of interactions;

· Effective and fast means of acquiring low-noise transmission data;

· Mathematical techniques to take advantage of improved detector technologies, such as the possible inclusion of time-of-flight information into the reconstruction process;

· Fast three-dimensional reconstruction algorithms, in particular for fast dynamic or whole-body studies;

· Fast and quantitatively correct iterative reconstruction algorithms for two- and three-dimensional reconstructions;

· Inexpensive and powerful reconstruction processors to accommodate the above; and

· Fast real-time sorting electronics for data acquisition, and efficient data storage and handling capabilities for the vast amount of projection data in three-dimensional and whole-body studies.

6.5 Suggested Reading

1. Anger, H.O., Gamma-ray and positron scintillator camera, Nucleonics 21 (1963), 56.

2. Bailey, D.L., 3D acquisition and reconstruction in positron emission tomography, Ann. Nucl. Med. 6 (1992), 123-130.

3. Budinger, T.F., PET instrumentation, in The Biomedical Engineering Handbook, J.D. Bronzino, ed., CRC Press, Boca Raton, Fla., 1995, 1140-1150.

4. Casey, M.E., and Nutt, R., A multicrystal two-dimensional BGO detector system for positron emission tomography, IEEE Trans. Nucl. Sci. 33 (1986), 460-463.

5. Grootoonk, S., Spinks, T.J., Kennedy, A.M., Bloomfield, P.M., Sashin, D., and Jones, T., The practical implementation and accuracy of dual window scatter correction in a neuro-PET scanner with the septa retracted, IEEE Conf. Med. Imaging 2 (1992), 942-944.

6. Kinahan, P.E., Rogers, J.G., Harrop, R., and Johnson, R.R., Three-dimensional image reconstruction in object space, IEEE Trans. Nucl. Sci. 35 (1988), 635-638.

7. Kübler, W., Ostertag, H., Hoverath, H., Doll, J., Ziegler, S., and Lorenz, W., Scatter suppression by using a rotating pin source in PET transmission measurements, IEEE Trans. Nucl. Sci. 35 (1988), 749-752.

8. Ollinger, J.M., and Johns, G.C., Model-based scatter correction in three dimensions, IEEE Conf. Med. Imaging 2 (1992), 1249-1252.

9. Ter-Pogossian, M.M., Phelps, M.E., Hoffman, E.J., and Mullani, N.A., A positron-emission transaxial tomograph for nuclear imaging (PETT), Radiology 114 (1975), 89-98.

10. Townsend, D.W., and Defrise, M., Image reconstruction methods in positron tomography, Lectures Given in the Academic Training Program of CERN, Report No. 93-02, CERN, Geneva, 1993.