ASSESSMENT OF VISION IN INFANTS AND CHILDREN
The testing of vision in infants and children is treated separately from the testing of adults because infants and children often cannot be tested with the same materials and techniques. In addition, the course of visual and cognitive development must be taken into account in evaluating infants' and children's visual abilities, and special techniques, which cannot be held to the same standards that apply to tests for adults, must often be used, especially with infants and preschoolers. The testing of children's vision is important to SSA because Title XVI of the Social Security Act provides SSI benefits for children with disabilities, and acceptable methods must be specified for determining disability in this population. This chapter reviews the major issues in testing infants' and children's visual acuity, visual fields, and contrast sensitivity and offers recommendations for testing to ensure fair evaluation of their visual abilities.
A major difficulty in assessment of vision in infants is that they cannot be tested with the standard tools that are used with adults. A second difficulty is that studies have shown that even normal infant vision is greatly inferior to that of normal adults. Thus, adult standards are not appropriate for use with infants. A third difficulty in determining the visual status of infants is that their vision is not static; it generally improves rapidly during the first postnatal year. In both normal and visually at-risk infants, the time course of the measured improvement in vision depends on both the assessment technique used and the aspect of vision that is being assessed. Finally, assessment of vision in infants is complicated by the fact that evidence of normal or abnormal visual status at one age is not necessarily predictive of what the visual status will be at a later age. That is, visual development during infancy is highly plastic and can be interrupted or modified by either external or internal environmental factors.
Because of the immaturity of the infant’s visual system and the dynamic nature of visual development during the first postnatal months, any program for the assessment of visual status in infants must recognize two important points. First, the results of visual assessment must be compared with normative data from infants of the same age, tested with the same assessment tool. Comparing results to norms based on data from adults or older children or to infants tested with a different procedure can lead to a misdiagnosis of visual impairment. Second, results of visual assessments conducted during infancy are not necessarily predictive of visual status later in life. An infant whose vision appears normal early in life may later show visual impairment if the visual system fails to undergo the considerable amount of development that normally occurs between infancy and adulthood. Similarly, some infants who appear visually impaired early in life show normal visual responses several weeks or months later.
Existing Social Security Administration (SSA) regulations appear to recognize these two points. In defining “marked” and “extreme”
limitations, the regulations indicate the importance of evaluating the young child relative to test norms and to age. In section §416.926a Functional equivalence for children of 20 CFR Ch. III (4-1-99 Edition), “marked” and “extreme” limitations are defined relative to test norms, with a marked limitation being a score that is ≥2 but <3 standard deviations below the norm for the test, and an extreme limitation being a score that is ≥3 standard deviations below the norm. What is implied but not specifically stated in these criteria is that the norm is specific to the age of the child.
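These standard-deviation criteria amount to a simple classification rule on how far a score falls below the age- and instrument-specific norm. The following sketch illustrates that rule; the function name and the norm values in the example are hypothetical, not actual acuity-test norms:

```python
def classify_limitation(score, age_norm_mean, age_norm_sd):
    """Classify a test score per the standard-deviation criteria of
    20 CFR 416.926a.

    The norm (mean and SD) must come from children of the same age,
    tested with the same instrument.
    """
    if age_norm_sd <= 0:
        raise ValueError("standard deviation must be positive")
    # Number of standard deviations the score falls below the age norm
    deficit = (age_norm_mean - score) / age_norm_sd

    if deficit >= 3:
        return "extreme"   # 3 or more SDs below the norm
    if deficit >= 2:
        return "marked"    # at least 2 but less than 3 SDs below the norm
    return "neither"

# Illustrative values only (mean 100, SD 15), not published acuity norms
print(classify_limitation(65, 100, 15))  # 2.33 SDs below -> "marked"
print(classify_limitation(55, 100, 15))  # 3 SDs below -> "extreme"
```

Note that the comparison values passed in must be the age-specific norms; the classification is meaningless against adult norms, for the reasons given above.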
In the same section of the SSA regulations, a second definition takes into account the age of the infants in a different way. This definition states that a "marked" limitation is present when a child between birth and age 3 years is functioning at more than one-half but not more than two-thirds of chronological age, and an "extreme" limitation is present when the child is functioning at one-half chronological age or less. This definition is problematic, because visual development does not progress linearly during infancy and early childhood; therefore, an infant of one age who is functioning at one-half chronological age may be substantially more impaired than an infant of another age who is functioning at one-half chronological age. For example, the visual acuity deficit experienced by a 6-month-old infant whose acuity is equivalent to that of a 3-month-old is substantially larger than the visual acuity deficit experienced by a 1-year-old child whose acuity is equivalent to that of a 6-month-old. This is because visual acuity improves rapidly between birth and age 6 months, but it improves only slightly between ages 6 and 12 months (Mayer & Dobson, 1982; Norcia & Tyler, 1985). Existing SSA regulations also recognize the dynamic nature of visual development. SSA Publication No. 05-10026, dated October 2000, indicates that the law requires that a continuing disability review be conducted at least every three years for recipients under age 18 whose conditions are likely to improve, and not later than 12 months after birth for infants whose disability is based on low birthweight.
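The second definition reduces to a rule on the ratio of functioning age to chronological age. The sketch below (function name hypothetical) also makes the problem concrete: both example children function at exactly half their chronological age and so receive the same classification, even though, as discussed above, their underlying acuity deficits differ greatly:

```python
def classify_by_functioning_age(functioning_age_months, chronological_age_months):
    """Classify per the birth-to-age-3 functioning-age criterion.

    "Extreme": functioning at one-half chronological age or less.
    "Marked": functioning at more than one-half but not more than
    two-thirds of chronological age.
    """
    ratio = functioning_age_months / chronological_age_months
    if ratio <= 1 / 2:
        return "extreme"
    if ratio <= 2 / 3:
        return "marked"
    return "neither"

# A 6-month-old functioning at 3 months and a 12-month-old functioning
# at 6 months both have ratio 0.5 and are both classified "extreme",
# yet the 6-month-old's acuity deficit is substantially larger, because
# acuity improves rapidly before 6 months and only slightly afterward.
print(classify_by_functioning_age(3, 6))    # "extreme"
print(classify_by_functioning_age(6, 12))   # "extreme"
```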
Between infancy, which is generally considered to end at age 1 year, and a child’s entry into the school system at age 5 to 6 years, there is a period during which the child shows considerable development, in both vision and cognitive skills. As a result, the tools that can be used to assess vision in children in the preschool-age range vary, depending on age and cognitive abilities. With toddlers, it is usually necessary to use tools similar to those developed for use with infants, but adapted to the toddler’s very short attention span. In contrast, the oldest preschool children can often be tested with assessment tools similar or identical to those used with adults.
As with infants, the changing visual and cognitive status of the young child makes it especially important that visual assessment results of preschool children be compared with results from normal children of the same age tested with the same technique. This is recognized by the Social Security Administration regulations, as described above. The need to compare a child's visual status with age-based and instrument-based norms is important for older preschool children as well as for toddlers, since even these children, who can often complete visual assessment procedures designed for adults, typically obtain results that, although normal for their age, fall below those of adults.
As in infancy, the changing visual and cognitive status of the preschool child means that periodic review of the child’s visual abilities, as measured with the most sophisticated procedure that the child is capable of performing, is advisable. SSA recognizes the need for repeated assessment of visual status in the developing child by requiring continuing disability reviews as noted above.
In general, children of normal intelligence who have reached 5 to 6 years of age can be tested with the same procedures that are used to assess visual function in adults. However, their results are typically lower than those of adults, and therefore it is important to compare
the results of school-age children with data from normal children of the same age. In addition, it is often useful when testing the youngest school-age children to use modified procedures that permit the child to respond in a nonverbal manner.
Adults and School-Age Children Who Cannot Perform Standard Tests of Visual Function
Some adults and school-age children cannot be tested using standard adult tests of visual function because of language, physical, or cognitive limitations. For these individuals, useful information about their visual capabilities may be obtained by assessing them with tests designed for younger children or infants. However, it is important to recognize that (1) the results of tests designed for younger children and infants are typically less accurate than results based on tests designed for adults and (2) tests designed for younger children and infants often use stimuli (e.g., large grating targets) that may fail to reveal visual deficits that would be evident if standard stimuli (e.g., letter targets) could be used.
Visual Acuity

Visual acuity is a measure of the finest detail that can be resolved or recognized by the visual system. Visual acuity can be reduced by the optical blur produced by imperfect optics of the eye (refractive error), which can be corrected by spectacle or contact lens correction, or it can be reduced by neural deficits, which cannot be corrected optically. Because visual acuity deficits due to refractive error are correctable and therefore do not result in a disability, visual acuity assessment should be conducted with the individual wearing best optical correction. For adults, best correction is typically evaluated by manifest refraction, in which the adult judges which lenses produce optimal ability to read an eye chart. For infants, very young children, and multiply handicapped individuals with whom manifest refraction cannot be
performed, the estimate of best correction must be made using objective techniques, such as autorefraction or retinoscopy.
The visual acuity of school-age children can usually be tested using standard letter acuity tests that are designed for use with adults. Testing of preschool-age children often requires modified visual acuity tests, composed of a limited subset of letters or symbols that can be identified or matched to a card that is held by the child. Infants and children younger than 3 years usually cannot identify letters or symbols verbally or by matching. The most successful way to assess their visual acuity is through observation of their visual system’s electrophysiological responses or eye movement responses to repetitive grating (striped) or checkerboard patterns. This strategy of assessing an infant’s resolution acuity rather than his or her recognition acuity may underestimate the depth of some visual acuity deficits (e.g., amblyopia or lazy eye), but it currently provides the best method for assessing a young child’s visual capability.
Impairments of visual acuity can hinder children’s social and academic development (Hyvärinen, 1994, 1998a, 1998b). Early identification of visual impairment can assist parents, teachers, and eye care practitioners in providing suitable modifications in a child’s social and educational environment (Hyvärinen, 1994, 1998a, 1998b; Jose & Rosenbloom, 1990; Kalloniatis & Johnston, 1990; McAlpine & Moore, 1995).
Visual acuity is the one aspect of visual function for which there are well-established, validated tools for assessment of infants and young children. Furthermore, age-normative data are available for most of these assessment tools. Therefore, assessment of visual acuity is the primary method that is currently available for quantification of visual impairment in infants and preschool-age children. Although no standardized tools have yet been developed to measure the effect of visual impairment on quality of life in infants and children, results of visual acuity testing have been shown to be related to a young child’s daily activities and the way the child interacts with the environment (Katsumi et al., 1995).
Assessment in Infants
Fixation and Following
In most clinical settings, the eye care practitioner makes a qualitative assessment of an infant’s vision, based on his or her ability to show steady fixation of a target and to follow the target using smooth pursuit movements. However, the ability to fix and follow does not necessarily indicate normal visual acuity, since many older children with 20/200 or worse visual acuity fix and follow well (Day, 1990). Similarly, failure to show normal fixation and following shortly after birth is not necessarily predictive of a later visual deficit but may simply be an indicator of delayed visual maturation (Fielder et al., 1985; Illingworth, 1961).
Visual Evoked Potential
The visual evoked potential (VEP, also called the visual evoked response or VER) is an electrical signal generated by the occipital cortex of the brain in response to visual stimulation. It is recorded through one or more electrodes placed on the scalp over the visual cortex. Visual acuity can be estimated by recording VEP responses to patterned stimuli, such as phase-alternating, black and white gratings, in which the overall luminance of the target remains constant but the spatial configuration of the pattern changes. Typically, as the size of the pattern elements decreases, the amplitude of the VEP decreases, with the result that the visual acuity threshold can be estimated as the finest grating or the smallest check size that results in a measurable VEP (for details of recording and scoring techniques, see Norcia, 1994). Normative data are available for VEP acuity for infants between birth and age 1 year (McCulloch et al., 1999; Norcia & Tyler, 1985). However, use of the VEP for measurement of visual acuity in individual infants has been limited to a relatively small number of clinical sites, undoubtedly due to the expense of the equipment and the technical expertise required to conduct the test.
The advantages of using the pattern VEP for measurement of visual acuity in infants are several: (1) measurements can be made quickly, within a time span over which most infants will remain cooperative and will fixate on the stimulus; (2) the procedure requires minimal response from the infant; (3) the VEP can be a good indicator of macular function, since it is generated primarily by the area of visual cortex that receives input from the macular region; and (4) data on the distribution of acuity results in normal infants of different ages are available, making it possible to interpret an infant’s visual acuity score in terms of number of standard deviations below normal, as suggested in the current SSA regulations.
There are limitations on the pattern VEP for assessment of visual acuity in infants: (1) the testing equipment is expensive and not widely available; (2) technical expertise is required for conducting the procedure and interpreting the responses; (3) it can be difficult to obtain a measurable response from infants with such oculomotor abnormalities as nystagmus and such neuromotor abnormalities as cerebral palsy, which may cause muscle artifacts that obscure the visual signal; and (4) infants older than 9 months may resist having electrodes attached.
Forced-Choice Preferential Looking (FPL)
The basis of the forced-choice preferential looking procedure is that infants show preferential fixation of a patterned stimulus in comparison to a homogeneous field. Thus, visual acuity can be measured by observing an infant’s eye movement responses to black and white gratings paired with a gray stimulus matched to the space-averaged luminance of the gratings.
The version of the procedure that is commercially available and is most widely used to measure visual acuity in infants is the acuity card procedure (Teller et al., 1986). In this procedure, the tester shows the infant a series of gray cards, each containing a black and white grating on the left or the right of a central, small peephole. Prior to testing, the cards are arranged in a stack, face-down, proceeding from coarser
to finer gratings. The tester presents each card to the infant several times, usually rotating the card by 180° to change the left-right position of the grating from presentation to presentation. The tester, who does not know the location of the grating on each card, watches the infant’s response through the peephole and decides, based on the infant’s eye movements and looking behavior in response to repeated presentations of the card, whether the infant can resolve the grating and, if so, the location (left-right position) of the grating. After this decision has been made for a card, the tester is permitted to look at the card to confirm the location of the grating.
An infant’s visual acuity is scored as the finest grating that the tester judges that he or she can resolve. Normative data have been published for the acuity card procedure for both binocular and monocular testing of infants between birth and 1 year, as well as for young children up to 3 to 4 years of age (Courage & Adams, 1990; Mayer et al., 1995; Salomão & Ventura, 1995).
The acuity card procedure has been used successfully in a wide range of clinical settings to assess grating acuity in visually at-risk infants. In a multicenter study of cryotherapy for retinopathy of prematurity (CRYO-ROP), the acuity card procedure was used to measure acuity in more than 1,300 1-year-old infants with birthweights less than 1,251 g, two-thirds of whom developed retinopathy of prematurity in the perinatal period (Dobson et al., 1994). In the multicenter Ross Pediatric Lipid Study, the acuity card procedure was used to test vision longitudinally between ages 2 and 12 months in 197 infants, in order to evaluate the effects of diet on visual function and growth (Auestad et al., 1997). There are numerous single-center reports of the successful use of the acuity card procedure to evaluate visual status in infants with ocular or neurodevelopmental abnormalities, including, for example, cerebral visual impairment (Eken et al., 1996; van Hof-van Duin et al., 1998), severe ocular disorders (Fielder et al., 1991), Down syndrome (Courage et al., 1994), and cerebral hypoxia (van Hof-van Duin & Mohn, 1984).
The advantages of using the acuity card procedure for measurement of visual acuity in infants are that: (1) measurements can be made quickly, within a time span over which most infants will remain
cooperative and will fixate the stimulus; (2) the procedure allows the tester to interact with the infant visually between card presentations, which helps to maintain the infant’s interest in the testing procedure; (3) the procedure relies on the infant’s natural eye movement responses to a patterned stimulus; (4) the procedure is easy to learn; (5) the cost of the equipment is relatively low; (6) the procedure can be used with infants of all ages, as well as with children whose developmental age is that of an infant; (7) with modifications in the positioning of the cards, the procedure can be used to test infants with oculomotor abnormalities, such as nystagmus; and (8) data are available on the distribution of acuity results in normal infants of different ages, making it possible to interpret an infant’s visual acuity score in terms of number of standard deviations below normal, as suggested in the current SSA regulations.
There are limitations on the acuity card procedure for assessment of visual acuity in infants: (1) results depend on the integrity of the tester in remaining masked to the location of the gratings on the cards during their presentation (the purpose of remaining masked is to ensure that an unbiased assessment of visual acuity status is obtained); (2) cards must be kept free of dirt and smudges that could attract the infant’s attention away from the grating target; (3) grating acuity may underestimate recognition (letter) acuity loss in infants with strabismic amblyopia or macular disease; and (4) variability of acuity scores in normal infants is greater than that reported in VEP studies of normal infants—approximately 0.2 log unit for acuity cards (Courage & Adams, 1990; Mayer et al., 1995) versus approximately 0.13 log unit for VEP (Norcia & Tyler, 1985).
Predictive Value of Results
Data are not available on the extent to which VEP measures of acuity obtained during infancy predict visual acuity during childhood, perhaps because of the limited sites at which VEP testing of infants is conducted. However, several studies have examined the extent to which acuity card results in infancy correlate with recognition acuity results during childhood.
The largest study involved a comparison of grating acuity obtained with the acuity card procedure at age 1 year and recognition (letter) acuity obtained with the Early Treatment for Diabetic Retinopathy Study (ETDRS) charts (Ferris et al., 1982) at age 5.5 years in 616 children who were participants in the multicenter CRYO-ROP study (Dobson et al., 1999). Of the 93 eyes in which vision was too poor to quantify at 1 year with the acuity cards, 90 remained without quantifiable vision at 5.5 years, and three showed measurable letter acuity of 20/400, 20/500, and 20/1600. Of the 347 eyes that had acuity at 1 year that was in the normal range, which was defined as the mean for age ±2 standard deviations (Mayer et al., 1995; Salomão & Ventura, 1995), 84.7 percent showed acuity of 20/40 or better at 5.5 years, and none showed acuity of 20/200 or worse. Of the 193 eyes that had acuity in the below-normal range at 1 year (down to approximately 3 standard deviations below the mean for age), most (74.1 percent) showed acuity of 20/40 or better at 5.5 years, and only four (2.1 percent) showed acuity of 20/200 or worse at 5.5 years. Correlation analysis indicated, however, that grating acuity score at age 1 year accounted for only 2.9 percent of the variance in recognition acuity scores at 5.5 years. Thus, infants with grating acuity in the normal or near-normal range at 1 year are likely to have normal recognition acuity at 5.5 years, and those with acuity too poor to be measured with acuity cards will continue to have impaired vision at age 5.5 years. However, the grating acuity score obtained with the acuity cards at age 1 year cannot be used to predict a child’s recognition acuity score upon reaching kindergarten age.
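To put the variance figure in perspective: percent of variance explained is the squared Pearson correlation, so 2.9 percent of variance corresponds to a correlation of only about 0.17. A minimal sketch of that relationship (function name is illustrative):

```python
import math

def correlation_from_variance_explained(r_squared_pct):
    """Recover the magnitude of the Pearson correlation from the
    percent of variance explained (r-squared expressed as a percent)."""
    return math.sqrt(r_squared_pct / 100)

# 2.9 percent of variance (Dobson et al., 1999) implies |r| of about 0.17,
# a very weak association between infant grating acuity and letter acuity
# at age 5.5 years.
print(round(correlation_from_variance_explained(2.9), 2))  # 0.17
```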
Two other single-center studies have reported similar results. Mash and Dobson (1998) compared grating acuity results during infancy (at 4, 8, and 11 months from the infant’s due date) with letter acuity results (using the letters HOTV) at age 4 years in 129 children treated in the neonatal intensive care unit for preterm birth or perinatal complications. Their data showed that 89 to 92 percent of children who had normal grating acuity during infancy showed normal letter acuity at age 4 years. However, grating acuity scores during infancy accounted for only 5 to 11 percent of the variance in letter acuity at 4 years. Similarly, Hall et al. (2000) found that normal grating acuity during infancy was highly predictive of normal recognition acuity scores at ages 3 to 10 years in infants at risk for visual disorders.
However, when individual pairs of scores were considered, there was no significant correlation between early grating acuity and later recognition acuity.
Assessment in Preschool-Age Children
While it is difficult to test children under 5 years of age with adult letter visual acuity charts, such as the ETDRS charts (Ferris et al., 1982), tests have been developed that are more “child friendly” yet meet many of the requirements set forth by the Committee on Vision (National Research Council, 1980) for assessment of visual acuity in adults. A recent report from the Maternal and Child Health Bureau/National Eye Institute-sponsored task force on preschool vision screening (Hartmann et al., 2000) illustrates three of these tests: the HOTV letter chart, the Lea symbols chart (which uses four symbols: house, heart, square, and circle), and the tumbling E chart.
In the illustrations shown in the task force report, each of the charts contains lines of five letters or symbols each, with the distances between symbols and between lines spaced in logarithmic steps, similar to the ETDRS charts. An advantage of both the HOTV and Lea symbols charts is that they use left-right symmetric optotypes, which overcome the young child’s difficulty with horizontal laterality (Graham et al., 1960; Rudel & Teuber, 1963; Wohlwill, 1960). In addition, a near visual acuity version of the Lea symbols chart is available, which permits assessment of visual acuity at 40 cm.
Two other tests that use left-right symmetric letters, with a logMAR progression in letter size, are the Glasgow acuity cards (McGraw & Winn, 1993) and the BVAT (Mentor, Inc.) crowded HOTV test. Each Glasgow acuity card contains four of six letters (X, V, O, H, U, and Y), with the four letters surrounded by a crowding bar. In the BVAT crowded HOTV test, single letters (H, O, T, and V) are presented surrounded by crowding bars, with logarithmic steps between letter size presentations. The crowding bars surrounding the single letters in the HOTV test help to prevent the overestimation of visual acuity that
occurs in certain types of visual abnormality, such as amblyopia, when acuity is tested with single letters (Flom, 1991).
Another advantage of the HOTV and Lea symbols tests, as well as the Glasgow acuity cards, is that a lap card is available for each test, so that the child who is reluctant to identify the letters or symbols verbally can identify the symbols by pointing to them on the lap card. This same strategy can be used with neurodevelopmentally delayed older children and adults whose cognitive or literacy skills prevent them from being tested with standard adult letter acuity charts.
Success rates for 3- and 4-year-old children have been reported to be poor for the tumbling E test (Friendly, 1978), higher for the HOTV chart (Friendly, 1978; Hered et al., 1997), and highest for the Lea symbols charts (Hered et al., 1997). Unfortunately, however, large-scale normative data are not available for preschool-age children tested with any of these logMAR tests, although published screening recommendations state that children in this age range should be able to identify optotypes on the 20/40 line (American Academy of Pediatrics Committee on Practice and Ambulatory Medicine, Section on Ophthalmology, 1996; Hartmann et al., 2000).
Success rates for assessment of recognition acuity in children less than 3 years of age are very low (McDonald, 1986), due to the inability of young children to identify or match letters or symbols. In addition, it is difficult to get children in this age range to sit still and cooperate for electrophysiological (VEP) measurement of resolution (grating) acuity.
The only quantitative methods that have been used successfully for assessing visual acuity in substantial numbers of children between 1 and 2 years of age are rapidly conducted forced-choice preferential looking measures of resolution (grating) acuity, such as the Teller acuity card procedure (McDonald et al., 1986). Normative data for children between 1 and 4 years of age have been published by several groups (Heersema & van Hof-van Duin, 1990; Courage & Adams, 1990; Mayer et al., 1995; Salomão & Ventura, 1995), making it possible to interpret a child’s visual acuity score in terms of number of standard deviations below normal, as suggested in the current SSA regulations.
Assessment in School-Age Children
As discussed in Chapter 2, the standard method of visual acuity assessment in adults is a logMAR chart, such as the Bailey-Lovie chart (Bailey & Lovie, 1976) and the Early Treatment for Diabetic Retinopathy Study (ETDRS) charts (Ferris et al., 1982). These tests have also been used successfully in studies of school-age children.
In a study of 106 10-year-old children with no ocular abnormalities who were tested with ETDRS charts, Myers et al. (1999) reported a mean monocular distance visual acuity of −0.009 logMAR (20/19.6) in the right eye and −0.004 (20/19.8) in the left eye, with a standard deviation of approximately one logMAR line (0.082 and 0.090 log unit for the right and left eyes, respectively). In a study of younger children (n = 31, 5.5 to 7 years of age) with no ocular or cerebral pathology who were tested with the Bailey-Lovie chart, Dowdeswell et al. (1995) reported a mean monocular acuity of 0.10 logMAR (20/25.2), with a standard deviation of 0.08 log unit.
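The Snellen equivalents quoted above follow directly from the definition of logMAR: the minimum angle of resolution in arcminutes is 10 raised to the logMAR score, so the denominator of a 20-foot Snellen fraction is 20 × 10^logMAR. A short sketch checking the reported values (function name is illustrative):

```python
def snellen_denominator(logmar, test_distance=20):
    """Convert a logMAR score to the denominator of the Snellen fraction.

    logMAR = log10(minimum angle of resolution in arcminutes), so the
    Snellen denominator is test_distance * 10**logmar. One chart line
    is 0.1 log unit, a factor of about 1.26 in resolvable angle.
    """
    return test_distance * 10 ** logmar

# Values reported by Myers et al. (1999) for 10-year-olds:
print(round(snellen_denominator(-0.009), 1))  # 19.6, i.e., 20/19.6
print(round(snellen_denominator(-0.004), 1))  # 19.8, i.e., 20/19.8
# And the familiar benchmark: logMAR 1.0 corresponds to 20/200
print(round(snellen_denominator(1.0)))        # 200
```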
The multicenter CRYO-ROP study reported successful use of ETDRS charts in a group of over 200 5.5- to 6-year-old very low birthweight children (mean birthweight, 800 g, SD 165; mean gestational age 26.3 weeks, SD 1.8), who were at risk for visual deficits due to severe retinopathy of prematurity (Cryotherapy for Retinopathy of Prematurity Cooperative Group, 1996). After excluding 56 cryotherapy-treated eyes and 85 control eyes judged to have no quantifiable pattern vision, an ETDRS acuity score was obtained for 116/177 (65.5 percent) of treated eyes and 90/145 (62.1 percent) of control eyes in this group of very premature children, many of whom had significant developmental delay (Msall et al., 2000). At age 10 years, ETDRS monocular distance acuity scores were obtained for 144 (91.7 percent) of 157 treated eyes and 106 (90.6 percent) of 117 control eyes that were sighted (Cryotherapy for Retinopathy of Prematurity Cooperative Group, 2001c).
Dowdeswell et al. (1995) also used a logMAR (Bailey-Lovie) chart to measure distance visual acuity in young, school-age children (5.5 to 7 years) who were born prior to term (<32 weeks gestation).
Monocular acuity results were successfully obtained in 65 (95.6 percent) of the sample of 68 children.
For near acuity, versions of both the Bailey-Lovie and ETDRS charts are available for assessment. Myers et al. (1999), who tested 106 healthy, full-term 10-year-old children with the near ETDRS charts, reported a mean monocular near visual acuity of −0.011 logMAR (20/19.5) in the right eye and −0.018 (20/19.2) in the left eye, with a standard deviation of approximately one logMAR line (0.10 and 0.11 log unit for the right and left eyes, respectively). In their study of 5.5- to 7-year-old healthy children tested with the near Bailey-Lovie chart, Dowdeswell et al. (1995) reported a mean monocular near acuity of 0.045 logMAR (20/18), with a standard deviation of 0.12 log unit.
Dowdeswell et al. (1995) reported that out of their group of 68 children 5.5 to 7 years of age who were born more than 8 weeks prior to term, 59 (86.8 percent) were able to complete near acuity testing of each eye. Among very low birthweight children with severe retinopathy of prematurity who were tested at age 10 years in the CRYO-ROP study, ETDRS monocular near acuity scores were obtained in 144 (91.7 percent) of 157 treated eyes and 105 (90.5 percent) of 116 control eyes that were sighted (Cryotherapy for Retinopathy of Prematurity Cooperative Group, 2001c).
In the CRYO-ROP study, children were provided with a lap card containing large (6-cm high) examples of the 10 letters that appear on the ETDRS charts (Cryotherapy for Retinopathy of Prematurity Cooperative Group, 1996, 2001c). This permitted children to match (point to), rather than verbally identify, the letters on the ETDRS charts.
Assessment in Those Who Cannot Perform Standard Tests
Registry data indicate that, in general, over half of children who have visual impairments also have other impairments, including mental retardation, cerebral palsy, hearing impairments, and epilepsy (Yeargin-Allsopp et al., 1992; Johnson-Kuhn, 1995; Ferrell et al., 1998; Viisola, 2000). In many cases, these children may be unable to perform visual acuity tests appropriate for their chronological age;
however, useful information about their visual functioning may be obtained through assessment tools designed for younger children or infants (Orel-Bixler et al., 1989; Scharre & Creedon, 1992; Haegerstrom-Portnoy, 1993; O’Dell et al., 1993; Mackie et al., 1996; Westall et al., 2000). Similarly, successful measurement of visual acuity has been reported in adults with severe cognitive impairment, through the use of the Teller acuity card procedure (Marx et al., 1990).
If possible, the visual acuity of children should be assessed with the methods that are recommended for adults, i.e., with refractive error corrected, using charts with a standard number of optotypes per line and a logarithmic progression of optotype size and spacing from line to line on the chart. Most school-age children can be tested using standard adult visual acuity charts and following the standard procedure, in which the patient identifies verbally each letter on the chart.
Many preschool-age children cannot verbally identify letters on an adult visual acuity test, so modified procedures or charts may be required. The modification may be as simple as providing a lap card to permit a 5-year-old to match, rather than verbally identify, the letters on an adult acuity chart. Alternatively, for a 3-year-old, it may be necessary to use familiar shapes rather than letters on the acuity chart and to reduce the number of symbols that the child must identify during testing. Regardless of whether a preschool-age child is tested with a standard adult test, such as the ETDRS or Bailey-Lovie chart, or with a test designed for preschoolers, such as the Lea symbols test, it is important to compare the child’s results with those of other children of the same age tested with the same method, rather than with the results of adults, because visual acuity typically does not reach adult levels before a child enters elementary school (Atkinson et al., 1988; Dowdeswell et al., 1995).
Measurement of visual acuity using letter or symbol optotypes is not possible in infants. However, infants’ visual acuity can be tested with electrophysiological techniques (limited availability) and behavioral techniques (more widely available) that use resolution acuity targets, such as a black-and-white grating or checkerboard. These techniques have been used successfully with infants and young children in both research and clinical settings. Visual acuity results from normally sighted children between birth and age 1 to 2 years show rapid improvement over the first six postnatal months, followed by more gradual improvement over the next one to two years. This longitudinal change in visual acuity supports SSA’s use of “number of standard deviations below age norm” in the disability determination process, as well as its requirement for periodic reassessment of the visual status of children who meet disability requirements. The fact that the change is not linear, however, indicates that another SSA regulation, which recommends comparing the visual acuity results of a potentially visually disabled child with results from normal children of half that child’s age, is inappropriate: the deficit represented by half-age vision varies with the child’s age, being smaller when the child is in the 1- to 2-year range than when the child is in the birth to 6-month range.
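The “standard deviations below age norm” criterion mentioned above reduces to a simple z-score against age-specific norms. A minimal sketch, with invented placeholder values (not published norms):

```python
# Hypothetical sketch of the "standard deviations below age norm" criterion.
# The mean and SD used in the example are placeholders, NOT published norms.
def sds_below_norm(score, age_mean, age_sd):
    """Number of standard deviations a score falls below the age-specific mean."""
    return (age_mean - score) / age_sd

# A child whose acuity score is 0.5 against a hypothetical age norm of
# mean 1.1 (SD 0.2) falls about 3 standard deviations below the norm:
print(sds_below_norm(0.5, age_mean=1.1, age_sd=0.2))
```

The key point the sketch makes is that the criterion is meaningful only when the mean and SD come from children of the same age tested with the same method.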
Methods that have been developed for use with infants and young children have the potential to be useful for assessment of visual acuity in older children and adults who are too cognitively impaired to be tested with standard adult acuity charts. However, it is important to remember that tests that are based on eye movement responses to large grating stimuli may underestimate the visual acuity deficit of a patient with conditions that affect the macula, such as macular degeneration and amblyopia.
Issues Needing Further Study
Although there are methods available for assessment of visual acuity in children from birth through adolescence, additional research is
needed to establish age-related norms for acuity scores obtained with these methods, as well as to provide data on the reliability and validity of each method.
More research is also needed to document the level of visual acuity that represents disability among older children and adults whose neurodevelopmental status prevents them from being tested with standard adult visual acuity tests, but who can be tested with methods designed for infants and young children.
Finally, it is important that studies be conducted with children to evaluate the effect of different levels of visual acuity deficit on everyday activities and quality of life, both for children without additional impairments and for children and adults whose other impairments make it necessary for their visual acuity to be assessed with tools developed for use with infants and younger children.
The visual field is typically assessed using small spots of light that are illuminated briefly at various peripheral locations (static perimetry) or are moved inward from the periphery (kinetic perimetry) while the subject fixates on a central target. However, standard static perimetry techniques are difficult to use with children younger than about 8 years of age, and adult kinetic perimetry procedures typically cannot be used with children younger than 5 or 6 years of age.
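The kinetic procedure described above can be sketched as a simple loop, with a hypothetical detection model standing in for the subject; the function name, starting eccentricity, and step size are illustrative choices, not taken from any perimeter’s specification:

```python
# Minimal sketch of one kinetic perimetry trial: a target moves inward from
# the far periphery, and the eccentricity at which the subject first responds
# is taken as the visual field extent along that meridian.
def kinetic_trial(detects, start_deg=90, step_deg=2):
    """Return the eccentricity (degrees from fixation) at first response."""
    ecc = start_deg
    while ecc > 0:
        if detects(ecc):
            return ecc
        ecc -= step_deg
    return 0

# A subject who detects targets within 60 degrees of fixation:
print(kinetic_trial(lambda ecc: ecc <= 60))  # → 60
```

Static perimetry inverts this logic: the target stays fixed at one location while its intensity varies, so sensitivity rather than extent is mapped.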
In children, as in adults, severely restricted visual fields can have a detrimental effect on an individual’s mobility, ability to read or benefit from visually presented information, and ability to interact socially. There is a long history of using perimetry and visual field testing to evaluate the status of peripheral vision in adults in both clinical and research settings. Automated static perimetry is available in the offices of most eye care practitioners, and the limitations of restricted visual field extent and of nonseeing areas within the visual field have been widely studied. For children who are old enough cognitively to be tested in a standard adult perimeter, the results of testing can provide an accurate indicator of visual field restrictions. Quantitative
techniques for evaluating visual fields in younger children and infants, however, are available in only a small number of research and clinical settings. Thus, quantitative evaluation of visual fields is not currently a practical means of evaluating disability in infants and preschool-age children.
Assessment in Infants
Quantitative perimetry is not widely available for assessment of visual fields in infants. Therefore, assessment of large visual field deficits in infants is usually made using confrontation techniques. The examiner faces the infant and attracts his or her attention centrally. Then an assistant introduces a toy or a light into the far periphery, and the examiner watches to see if the infant makes a rapid eye or head movement in the direction of the peripherally presented toy or light. A deficit that can be detected by this method is likely to be functionally significant in the future (Day, 1990).
White Sphere Kinetic Perimetry
Techniques for quantitative perimetry in infants are available, but their use has been primarily in research settings. The most widely used is the white sphere kinetic perimetry procedure (Mohn & van Hof-van Duin, 1986), in which an infant is induced to fixate on a centrally located white sphere while an assistant moves a second white sphere centrally from the far periphery along one of the arms of a single- or double-arc black perimeter. An observer hidden behind a black curtain watches to make sure that the infant is looking centrally at the beginning of each trial and indicates when the infant makes an eye movement away from center. The location of the peripheral white sphere when the infant makes an eye movement toward that target is used as an estimate of visual field extent along that perimeter arm. Normative data, available for infants between birth and 12 months of
age, indicate that a gradual enlargement of the measured visual field from approximately 30° in each direction to nearly adult levels occurs during this time period (Mohn & van Hof-van Duin, 1986; van Hof-van Duin et al., 1992).
Overall, the advantages of white sphere kinetic perimetry include the availability of normative data against which to compare the results of visually at-risk infants, the use of relatively simple equipment, the ease with which the procedure can be used with any infant who has sufficient vision to fixate on a central target, and the quantitative nature of the test results. Disadvantages include the lack of availability of testing equipment in most clinical settings, the need for two adults (an observer plus an assistant to present the peripheral target), the imprecision of the test results due to the limited attention span of the infant for repeated presentations of the peripheral target, and the continued presence of a central target, which may interfere with some infants’ ability to respond when the peripheral target is presented.
Several research labs have conducted studies of infants using static perimetry, in which the infant’s eye movement responses are observed when a stationary stimulus is presented at different locations in the peripheral field (Lewis & Maurer, 1992; Harvey et al., 1997c). The advantages of static perimetry include the ability to extinguish the central fixation target during presentation of the peripheral stimulus, as well as the ability to identify precisely the location of the peripheral target when it was looked at by the infant. The major disadvantage of static perimetry is that strategies have not yet been devised for eliciting enough trials from an individual infant to quantify that infant’s visual field status (Maurer & Lewis, 1991).
Perimetry in Visually At-Risk Infants
White sphere kinetic perimetry has been used in a number of studies of visually at-risk infants, including those with retinopathy of
prematurity (Fetter et al., 1992), perinatal asphyxia (Luna et al., 1995), periventricular leukomalacia (Scher et al., 1989), and intraventricular hemorrhage (Harvey et al., 1997b). Data from a limited number of longitudinal studies of at-risk infants suggest that normal visual field extent in early infancy is not necessarily predictive of normal visual field extent in later infancy or early childhood, but that restricted visual fields in early infancy are usually, but not always, predictive of later visual field deficits (Harvey et al., 1997b; Luna et al., 1995).
As in assessment of other aspects of vision in infants and young children, it is important to compare visual field data from visually at-risk infants with data obtained from normal infants of the same age tested with the same procedure, since the age at which measured visual field extent reaches adult levels is highly dependent on characteristics of the stimuli used during testing (Mohan & Dobson, 2000).
Assessment in Preschool-Age Children
The only quantitative method that has been widely used to assess visual field extent in preschool children who cannot cooperate for perimetry using standard adult procedures is the white sphere kinetic perimetry technique that was developed for use with infants (Mohn & van Hof-van Duin, 1986). The technique has the advantage that normative data are available for preschool-age children (Quinn et al., 1991; Wilson et al., 1991; van Hof-van Duin et al., 1992). In addition, the technique has been used successfully to assess visual field extent in at-risk preschool children, including single-center studies of children who experienced intraventricular hemorrhage (Harvey et al., 1997b), perinatal asphyxia (Luna et al., 1995), bronchopulmonary dysplasia (Harvey et al., 1997a), periventricular leukomalacia (Cioni et al., 2000), and cerebral visual impairment (van Hof-van Duin et al., 1998), and a multicenter study of visual field extent in 5.5-year-old children who had undergone cryotherapy for severe retinopathy of prematurity (Quinn, Dobson, et al., 1996).
The disadvantages of the white sphere kinetic perimetry technique are that it is personnel-intensive and not widely available in clinical
settings. Therefore, clinical assessment of visual fields in preschool children who cannot be tested with Goldmann perimetry is generally limited to confrontation techniques.
By using the child’s eye movement responses, rather than buzzer presses, to indicate detection of a peripheral stimulus, investigators have successfully measured visual field extent with Goldmann perimetry in both normal and visually at-risk children between 3 and 5 years of age (Cummings et al., 1988; Mayer et al., 1991; Quinn et al., 1991; de Souza et al., 2000). Although normative Goldmann perimetry data for preschool children have not been published, data from Quinn et al. (1991) show that visual field extent, as measured with Goldmann perimetry, increases between ages 4 and 10 years. Thus, data obtained from visually at-risk preschool-age children tested with the Goldmann perimeter should be compared with data from normal children of the same age, not with normative data from adults.
Although automated static perimetry is used routinely to measure the sensitivity of the central 30° of the visual field of adults in both clinical and research settings, successful use of automated static perimetry in children younger than age 5 years has not been reported.
Assessment in School-Age Children
Goldmann perimetry has been used successfully in a number of studies to measure visual field extent in normal, school-age children (Lakowski & Aspinall, 1969; Liao, 1973; Quinn et al., 1991; Matsuo et al., 1998; Myers et al., 1999). Several investigators have reported developmental increases in measured visual field extent in school-age children. Quinn et al. (1991) showed an increase in measured visual field extent in children between ages 4 and 10 years, as did Lakowski and Aspinall (1969) for a group of 6- to 11-year-old children and Liao (1973) for a group of 6- to 12-year-old children. It is unclear whether this developmental increase in measured visual field extent is the
result of sensory maturation or is due to other factors, such as age-related improvements in response time, cognitive processing, or attentional abilities. Nevertheless, the finding of age-related differences in measured visual field extent highlights the importance of using age-based norms when deciding whether a child’s visual field extent is within the normal range.
Goldmann perimetry has been useful in measurement of visual field extent in school-age children with a variety of visual disorders, including severe retinopathy of prematurity with or without peripheral retinal ablation (Takayama et al., 1991; Quinn, Miller, et al., 1996; Cryotherapy for Retinopathy of Prematurity Cooperative Group, 2001b), aphakia following removal of unilateral or bilateral dense, central cataracts (Bowering et al., 1997), congenital glaucoma (de Souza et al., 2000), and visual field loss from use of the drug Vigabatrin to treat epilepsy (e.g., Vanhatalo et al., 1999; Wohlrab et al., 1999; Russell-Eggitt et al., 2000).
In standard Goldmann perimetry, the person being tested is required to press a buzzer to indicate the appearance of a peripheral target. Because this response can be difficult for young children, several investigators have reported using young children’s eye movements away from the fixation target to indicate detection of the peripheral target (Cummings et al., 1988; Mayer et al., 1991; Quinn et al., 1991; Quinn, Miller, et al., 1996). Data from children ages 4 to 10 years (Quinn et al., 1991) and adults (Mayer et al., 1991) indicate no significant difference in measured visual field extent whether a buzzer press or eye movements were used to indicate detection of the peripheral target.
Automated Static Perimetry
The first reported use of automated static perimetry in normal school-age children was by Bowering et al. (1993, 1997). These researchers used an Octopus 500 perimeter to measure the sensitivity of 7-, 8-, and 9-year-old children and adults to a 0.43° light presented at approximately 20° in the nasal field or 30° in the temporal field. The results showed no significant change in sensitivity with age, but there
was a tendency for greater variability in the sensitivities of the younger children than in those of older children and adults.
Recently, Safran, Tschopp and colleagues reported a series of carefully conducted studies of the feasibility, validity, and normative values for testing 5- to 8-year-old normal children with the Octopus 2000R automated perimeter (Safran et al., 1996; Tschopp, Safran, et al., 1998a, 1998b; Tschopp, Viviani, et al., 1999). The results indicated that, following a specially designed training phase, 80 percent of 5-year-olds and all children ages 6 through 8 years were able to complete a 100-trial screening procedure (Tschopp et al., 1998a). In addition, 40 percent of 5-year-olds, 70 percent of 6-year-olds, 90 percent of 7-year-olds, and all 8-year-olds were able to complete a full quantitative evaluation, based on 200 trials or more (Tschopp et al., 1998a). Normative data indicated lower sensitivity than that of a comparison group of 24- to 30-year-old adults at 17/24 locations tested in 5-year-olds, 6/40 locations tested in 6-year-olds, 2/76 locations (both at 27° eccentricity) in 7-year-olds, and 1/76 locations (at 27° eccentricity) in 8-year-olds (Tschopp et al., 1998b). Although Tschopp et al. (1999) found that age differences in sensitivity to peripheral stimuli were related more to differences in attentiveness than to sensory differences across ages, their studies highlight the importance of comparing automated static perimetry results from at-risk children with data from normal children of the same age tested with the same equipment and procedure.
An alternative strategy for testing 6- to 12-year-old children with static perimetry was reported recently by Morales and Brown (2001). Monocular perimetry was performed on the Octopus 1-2-3 perimeter using the TOP-32 short perimetry program, with a “video games” explanation of the task and a 1-minute training trial. Although variability was higher in younger children than in older children, all 50 children in the study were able to complete the TOP-32 program in less than 3.5 minutes per eye. Specificity (a normal field result in a normal child) was 78 percent for the total sample and 89 percent when data from 6- and 7-year-olds were excluded.
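Specificity here is the proportion of children with normal fields whose test result is classified as normal. A minimal sketch of the calculation; the counts are illustrative only, chosen to reproduce the 78 percent figure, and are not taken from Morales and Brown (2001):

```python
# Specificity = true negatives / (true negatives + false positives): the
# fraction of normal children whose field result correctly reads as normal.
# The counts in the example are illustrative, not from the study.
def specificity(true_negatives, false_positives):
    return true_negatives / (true_negatives + false_positives)

print(f"{specificity(39, 11):.0%}")  # 39 of 50 normal children test normal → 78%
```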
Automated static perimetry (Octopus 500 perimeter) was used successfully by Bowering et al. (1993) to measure visual field constriction at one nasal and one temporal location in 7- to 9-year-old
children who had been treated for a dense and central cataract in one or both eyes. Results were compared with those of normal 7- to 9-year-old children tested with the same procedure. Similarly, Kremer et al. (1995) used automated static perimetry (Humphrey perimeter) to document constriction of the visual field in the eyes of 10 children who had been treated with cryotherapy for retinopathy of prematurity between 10 and 14 years prior to testing. Recently, Donahue and Porter (2001) reported using the Swedish interactive thresholding algorithm (SITA), a new testing strategy for the Humphrey perimeter, to test visual fields in children between 6 and 17 years of age with visual field defects.
Two modifications have been used by investigators to increase the proportion of young school-age children who can be tested successfully with automated static perimetry. First, investigators have reduced the number of peripheral stimulus presentations that the child is required to complete. For example, Bowering et al. (1993) tested children with stimuli centered around one nasal and one temporal location. Tschopp et al. (1998b) tested 5-year-olds with only 32 percent of the test locations used with 7- and 8-year-olds and adults, and 6-year-olds with only 53 percent of the number of locations used with older age groups. Morales and Brown (2001) used a commercially available ultra-short program that employs a “lateral bracketing” strategy to estimate threshold sensitivities for 76 test points in the central 30° of the visual field in less than 3 minutes. A second modification used by Tschopp et al. to increase testability in young children was an extensive training protocol, in which a series of positive reinforcement procedures was used to teach the child to respond when “stars” appeared, but not to respond to sounds in the perimeter that were not accompanied by the appearance of a star. Morales and Brown found that a training session of approximately one minute was all that was needed for children to be able to complete the ultra-short Octopus TOP-32 program.
In summary, although it is possible to test many young school-age children with automated static perimetry, care must be taken to ensure that the child understands and can perform the task prior to beginning the actual measurement of sensitivity at different locations within the visual field. In addition, the short attention span of young
children may limit the degree of detail with which the sensitivity of the visual field can be mapped. By the time children reach ages 8 to 10 years, however, most can provide reliable data for sensitivity across the same area of the visual field that can be tested in adults.
Assessment in Those Who Cannot Perform Standard Tests
In contrast to the variety of tools that have been developed to assess visual acuity in infants and children who cannot be tested with standard adult techniques, there are no well-developed, widely available tools for assessing visual fields in individuals who lack the physical or cognitive ability to perform kinetic or static perimetry procedures developed for use with adults. Minor modifications, such as observing an individual’s eye movements in response to perimetry targets, can permit testing of individuals who are physically or cognitively unable to provide the standard button-press responses. For individuals whose severe neurodevelopmental delay or physical disabilities prevent use of a standard perimeter, however, estimation of visual field deficits is generally limited to confrontation testing.
If possible, visual fields of children should be assessed with the method that is recommended for adults, i.e., automated static perimetry. For children who are too young to be tested with standard adult perimetry procedures, there are no widely available, quantitative perimetry techniques and therefore no standardized methods for evaluating disability related to restricted visual fields.
Issues Needing Further Study
More research is needed to develop, norm, and validate methods for assessing visual fields in children too young to be tested with standard
adult perimetry procedures. In addition, there is a need for more age-based normative data for standard adult perimetry procedures, so that the results from individual children can be compared with results from normal children of the same age, rather than with normative data from adults.
Another area in which research is needed concerns the effect of visual field deficits on activities of daily living and quality of life in children. Such investigation should include children old enough to be tested with adult perimetry procedures, as well as children and adults whose cognitive development is not sufficient to allow them to be evaluated with adult perimetry procedures.
In adults, contrast sensitivity is measured by determining the least amount of contrast an individual needs to detect a difference in luminance between adjacent parts of a pattern. Laboratory studies have used measurements of contrast sensitivity in infants to produce a simulated view of what various patterns and scenes look like to an infant (Banks & Salapatek, 1981; Teller, 1997). However, there are no widely available, normed and validated tools for assessment of contrast sensitivity in infants or preschool-age children.
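The measurement just described rests on two standard definitions, which a short sketch makes concrete: the Michelson contrast of a grating and contrast sensitivity as the reciprocal of the threshold contrast, usually reported in log units. These are textbook formulas rather than the procedure of any one test, and the luminance values are illustrative:

```python
import math

# Standard contrast definitions (not specific to any one chart or study).
def michelson_contrast(l_max, l_min):
    """(Lmax - Lmin) / (Lmax + Lmin), for the light and dark bars of a grating."""
    return (l_max - l_min) / (l_max + l_min)

def log_contrast_sensitivity(threshold_contrast):
    """log10 of 1 / threshold contrast, the usual reporting scale."""
    return math.log10(1.0 / threshold_contrast)

c = michelson_contrast(l_max=110.0, l_min=90.0)   # illustrative luminances, cd/m^2
print(round(c, 2))                                 # → 0.1
print(round(log_contrast_sensitivity(c), 1))       # → 1.0
```

A pattern detectable only at 10 percent contrast thus corresponds to a contrast sensitivity of 10, or 1.0 log unit.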
While visual acuity provides a measure of the finest detail that an individual can resolve, contrast sensitivity testing provides information on the individual’s ability to detect patterns of all sizes, and thus gives a more complete description of an individual’s spatial vision than can be obtained from a visual acuity score. Because the world of the infant and young child is built around global perception, rather than the attention to fine detail required in reading, assessment of contrast sensitivity would likely provide a more accurate estimate of an infant’s or young child’s ability to function visually than would a measure of visual acuity. However, the development of techniques for assessing contrast sensitivity in infants and young children has lagged far behind the development of techniques for assessing visual acuity. Therefore, at this time, visual
acuity is the only aspect of spatial vision that can be assessed in a child too young to be tested with adult measures of contrast sensitivity.
Currently, methods to assess contrast sensitivity in adults require the individual to identify low contrast letters or to indicate, for a series of black and white gratings, the lowest contrast at which each pattern is detectable. Use of letters in the first type of test and the need for a large number of trials in the second type of test prevent either from being useful in the assessment of preschool-age children and infants.
Assessment in Infants
Both normal infants (Adams et al., 1992) and infants with Down syndrome (Courage et al., 1997) have been tested successfully with an acuity card type of contrast sensitivity test, and normative data are available for infants (Adams & Courage, 1996). However, the test is not ready for widespread use, due to poor test-retest reliability, long test times, and lack of commercial availability (Adams et al., 2000).
Measurement of contrast sensitivity in infants is also possible using the pattern VEP, and initial normative data are available (Norcia et al., 1990). However, it is unlikely that this technique will achieve widespread use, due to the expense of the equipment and the technical expertise required to interpret the results.
Assessment in Preschool-Age Children
The primary tests used to evaluate contrast sensitivity in adults are the Vistech chart (Ginsburg, 1984) and the Pelli-Robson charts (Pelli et al., 1988). Although the Vistech and Pelli-Robson contrast sensitivity charts have been used successfully with children as young as 5 years of age, they are not practical for use with younger children, due to the difficulty the children have in identifying grating orientation of stimuli on the Vistech chart and their inability to identify the letters used as stimuli on the Pelli-Robson charts (Rogers, Bremer, & Leguire, 1987; Scharre et al., 1990).
A potentially useful contrast sensitivity test for this age group is the low contrast version of the Lea Symbols test (Precision Vision, La Salle, IL). In this test, as in the Pelli-Robson charts, the symbols are of a constant size but contrast varies by row. Rydberg and Han (1999) reported using the low contrast version of the Lea Symbols test successfully with children between 3 years 9 months and 6 years of age who had normal vision or visual impairment due to ocular disease or amblyopia. However, normative data are not yet available for this test and, because it requires identification or matching of symbols, it is unlikely that the test would be useful for measurement of contrast sensitivity in children younger than about 3 years of age.
Another potentially useful procedure for assessment of contrast sensitivity is an “alley-running” procedure developed by Atkinson and colleagues (1981) to measure contrast sensitivity of 3- to 5-year-old children in a research setting. However, this procedure has received no follow-up development for use in clinical settings.
For children younger than age 3 years, it may be possible to measure contrast sensitivity with an acuity card procedure, similar to that used to measure grating acuity in this age range. Initial data obtained from normal 2- and 3-year-olds (Adams & Courage, 1993) indicate that it is possible to measure contrast sensitivity in children at the younger end of the preschool age range with this type of contrast sensitivity test (Adams et al., 1992). However, test times are relatively long (average = 12 min) and the cards are not yet commercially available.
Assessment in School-Age Children
Scharre et al. (1990) provided normative data on the Vistech chart for 5-, 6-, and 7-year-olds, showing that sensitivity increases with age, and that even at age 7 years, contrast sensitivity at all five spatial frequencies tested is lower than that of adults. Rogers, Bremer, and Leguire (1987) also found that Vistech contrast sensitivity in children younger than 7 years of age is lower than that of adults. Both Scharre et al. and Rogers et al. attempted to test children younger than 5 years
of age, but they reported low success rates for test completion in these younger children.
Powls et al. (1997) tested 163 normal-birthweight children ages 11 to 13 years using the Vistech chart and reported that results for the two lowest spatial frequencies were similar to those of adults, but that the children were less sensitive than adults to the three highest spatial frequencies. In contrast, Fitzgerald (1989) reported that children were relatively more sensitive to high spatial frequency gratings than the adults who were tested to produce the Vistech chart norms.
Unlike the Vistech chart, which measures contrast sensitivity for individual spatial frequencies, the Pelli-Robson charts provide a single contrast sensitivity value based on multi-spatial-frequency letter targets. Using the Pelli-Robson charts, Fitzgerald et al. (1993) reported a mean binocular contrast sensitivity of 1.89 log units (SD 0.97) for 49 children ages 8 to 12 years, which is within the range of values (1.75 to 1.91 log units) reported for monocular testing of young adults (Elliott, Sanderson, & Conkey, 1990; Elliott & Whitaker, 1992a; Beck et al., 1993). In contrast, Myers et al. (1999), in a study of 106 healthy, full-term 10-year-olds, reported mean monocular contrast sensitivities of 1.69 log units (SD 0.12) for the right eye and 1.66 log units (SD 0.11) for the left eye, lower than values typically reported for adults but similar to the mean monocular Pelli-Robson contrast sensitivity of 1.62 log units (SD 0.08) reported by Dowdeswell et al. (1995) for healthy 5.5- to 7-year-old children.
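The log-unit scores quoted above can be translated back into physical contrast thresholds, which helps in interpreting the small differences between groups. Since the score is the log10 of contrast sensitivity, the threshold contrast is simply 10 raised to the negative score; the loop below applies this to the values cited in this paragraph:

```python
# Convert a log contrast sensitivity score (e.g., a Pelli-Robson score)
# back to the corresponding physical (threshold) contrast.
def threshold_contrast(log_cs):
    """Threshold contrast corresponding to a log10 contrast sensitivity."""
    return 10 ** (-log_cs)

for log_cs in (1.89, 1.69, 1.62):   # values quoted in the studies above
    print(f"{log_cs:.2f} log units ~ {threshold_contrast(log_cs):.1%} contrast")
```

On this scale, the gap between 1.89 and 1.62 log units amounts to roughly a doubling of the contrast the child needs in order to see the letters.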
Dowdeswell et al. (1995) reported obtaining monocular Pelli-Robson contrast sensitivity results in 61 (89.7 percent) of 68 children 5.5 to 7 years of age with gestational ages of less than 32 weeks. Pelli-Robson charts were also used to measure contrast sensitivity at age 10 years in the CRYO-ROP study. A measure of contrast sensitivity was obtained in 143 (91.7 percent) of 156 treated eyes and 102 (90.3 percent) of 113 control eyes that were sighted (Cryotherapy for Retinopathy of Prematurity Cooperative Group, 2001a). Results showed that eyes of children in the CRYO-ROP study were more likely to show normal contrast sensitivity in the presence of reduced visual acuity than normal visual acuity in the presence of reduced contrast sensitivity, supporting data from studies of adults, which indicate that visual
acuity and contrast sensitivity measure different aspects of visual function.
To assist children in identifying the orientation of the grating patterns on the Vistech chart, Scharre et al. (1990) provided children with a pointer that could be aligned in the same orientation as the grating pattern or with a hand-held grating pattern that could be used to match the orientation of the pattern on the chart. No studies have reported adaptations of the Pelli-Robson procedure for use with young children, but it should be possible to create a lap chart that would allow the child to match, rather than verbally identify, the letters, similar to the lap card that has been used for assessment of letter visual acuity in young children (Cryotherapy for Retinopathy of Prematurity Cooperative Group, 1996, 2001c).
Assessment in Those Who Cannot Perform Standard Tests
There are no well-developed, widely available tools for assessment of contrast sensitivity in individuals who lack the ability to identify or match the orientation of the grating stimuli on the Vistech chart or to identify or match the letters on the Pelli-Robson charts.
In children whose visual acuity is measurable but below the normal range, it would be beneficial to evaluate their overall spatial vision by assessment of their contrast sensitivity. This is possible in children who have the cognitive skills to be tested with measures of contrast sensitivity developed for use with adults. For children who are too young to be tested with standard adult contrast sensitivity measures, there are no widely available techniques for assessment of contrast sensitivity and therefore no standardized methods for evaluating disability related to deficits in contrast sensitivity.
Issues Needing Further Study
More research is needed to develop, norm, and validate methods for assessing contrast sensitivity in children too young to be tested with standard adult procedures. In addition, there is a need for more age-based normative data for standard adult contrast sensitivity procedures, so that the results from individual children can be compared with results from normal children of the same age, rather than with normative data from adults.
Another area in which research is needed concerns the effect of contrast sensitivity deficits on activities of daily living and quality of life in children. This investigation should include children old enough to be tested with adult contrast sensitivity procedures, as well as children and adults whose cognitive development is not sufficient to allow them to be evaluated with adult procedures.