
Hearing Loss: Determining Eligibility for Social Security Benefits (2005)

Chapter: 3 Assessment of the Auditory System and Its Functions

Suggested Citation:"3 Assessment of the Auditory System and Its Functions." National Research Council. 2005. Hearing Loss: Determining Eligibility for Social Security Benefits. Washington, DC: The National Academies Press. doi: 10.17226/11099.

3
Assessment of the Auditory System and Its Functions

In this chapter we discuss the methods used to assess the functioning of the adult claimant’s auditory system and hearing functions. We begin with an overview of the otolaryngological examination (for adults and children), describing the features of a “standard” examination based on best professional practices, and make some recommendations on how and when the examination should be performed. We then describe audiological tests and review current knowledge of audiological testing for adults, with special reference to the tests that are now prescribed by the Social Security Administration (SSA) for use in determining disability due to auditory impairment and to other tests that might be suitable for this purpose (testing of children is discussed in Chapter 7). Our conclusions and recommendations for Social Security disability determination are presented in Chapter 4.

STANDARD OTOLARYNGOLOGICAL EXAMINATION

SSA regulations as presented in the Blue Book (Social Security Administration, 2003) require, as part of the disability determination process, “medical evidence about the nature and severity of an individual’s impairment(s)” from either a claimant’s own physician or a consultative examiner (CE). Medical reports from either the treating physician or the CE should include (Social Security Administration, 2003, p. 11):

  • Medical history;

  • Clinical findings (such as the results of physical or mental status examinations);

  • Laboratory findings (such as blood pressure, x-rays);

  • Diagnosis;

  • Treatment prescribed with response and prognosis;

  • A statement providing an opinion about what the claimant can still do despite his or her impairment(s), based on the medical source’s findings on the above factors. This statement should describe, but is not limited to, the individual’s ability to perform work-related activities such as …hearing…. For a child, the statement should describe his or her functional limitations in learning, …communicating.

SSA regulations also require that hearing tests “should be preceded by an otolaryngologic examination.” In contrast, the committee recommends that the otolaryngological examination should follow the audiological examination (but by no more than 6 months), because a physician cannot provide a competent report, including the six elements listed above, without recent audiometric data. The appropriate source for the otolaryngological examination is an otolaryngologist certified by the American Board of Otolaryngology. Otolaryngologists specialize in disorders of the ear, nose, throat, and related structures of the head and neck and have completed at least five years of residency training following receipt of the M.D. or D.O. (doctor of osteopathy) degree.

Medical History

The elements of the medical history in an examination performed to provide medical evidence for SSA program eligibility are identical to those included in a routine medical examination. There are, however, some areas of emphasis worthy of mention.

Chief Complaint and Present Illness: The claimant’s chief otological complaint may not be hearing loss; it may be tinnitus, vertigo, otalgia, or otorrhea. For each significant otological symptom, but especially for hearing loss, the physician should inquire about:

  • The nature of the symptom—Are hearing difficulties noticed in one ear or both? For what types of sounds?

  • Severity

  • Chronology—Onset? Change over time? Fluctuation?

  • Exacerbating and/or ameliorating factors, such as background noise

  • Effects on activities of daily living—ordinary conversation, telephone use, work.


Review of Systems: Review of organ systems will occasionally reveal problems in other systems that are relevant to otological diagnosis (e.g., eye symptoms in Cogan’s syndrome). Review of systems is more likely to be helpful in cases of hearing loss with onset in childhood or in cases demonstrating rapid progression or fluctuation of hearing loss.

Past Medical History: A history of mumps or measles (or of maternal rubella or cytomegalovirus) can be relevant if hearing loss began in childhood. Head injury often causes hearing loss. Most ototoxic drugs are given in the context of hospitalization for severe infections (aminoglycosides) or cancer (cisplatin and carboplatin). A history of previous otological treatment, especially ear surgery, is always relevant.

Social History: A discussion of family and marital status includes communication difficulties with the claimant’s spouse or partner, relatives, and other persons. The claimant’s educational and occupational history will assist in understanding both the difficulties experienced in the workplace and the knowledge, skills, and abilities that might be useful in other jobs. Previous hazardous noise exposure may be uncovered in a discussion of both work history (including military service) and recreational activities (e.g., shooting, woodworking). If there has been significant noise exposure in the past 72 hours, audiometry should be deferred.

Family History: Hearing loss or deafness prior to age 60 or ear surgery in a close relative may suggest a hereditary disorder.

Clinical Findings (Physical Examination)

Informal Observation of Communication: The otolaryngologist can usually directly observe the claimant’s ability to hear and understand in a communication environment similar to some workplaces: one-to-one conversation in a relatively quiet room. It is neither necessary nor desirable to do this separately from the process of obtaining a history and performing a physical examination. Instead, the claimant’s ability to hear and understand can be assessed based on informal conversation during history taking, ear examination, etc. If the claimant has either hearing aids or a cochlear implant, these should be used during as much of the interview and examination as possible. During this process, the examiner can note whether the claimant does poorly under certain conditions (e.g., inability to see the examiner’s face, increased distance from the examiner, presence of background noise, removal of hearing aids) and whether the claimant’s behavior is consistent. Any obvious language or cognitive problems should be noted.

Otoscopy: Observation and palpation of the auricle, followed by pneumatic otoscopy, will usually suffice to detect outer ear and middle ear abnormalities that may contribute to diagnosis. When conductive or mixed hearing loss is present, the ears should usually be examined by otomicroscopy.

Tuning Fork Tests (optional): Audiometric tests usually provide unambiguous evidence of the type of hearing loss (conductive, sensorineural, or mixed). Nevertheless, tuning fork tests (most often, Weber and Rinne tests using the 512 Hz fork) can sometimes provide a useful cross-check on the validity of the audiometric data. This is especially true if insert earphones are not used, because collapsing ear canals can cause apparent (but spurious) conductive hearing loss, which will be absent on tuning fork testing.

Head and Neck Examination: In the presence of chronic otitis media, examination of the neck, nasal cavities, and pharynx may disclose relevant findings. In congenital hearing loss, examination of the face, neck, oral cavity, and eyes may contribute to the identification of a hereditary or acquired syndrome. With these exceptions, head and neck examination is rarely helpful.

Cranial Nerves (optional): The functions of the third through twelfth cranial nerves are usually tested as part of a complete otological examination, especially if there is asymmetrical hearing loss.

Balance and Cerebellar Tests (optional): When the claimant’s complaints include vertigo, unsteadiness, or lightheadedness, the otological examination usually includes observation of the claimant’s gait; standing balance will be addressed with eyes open and closed. Tests of fine motor coordination and ability to determine the spatial position of body parts without vision are often used to assess cerebellar function.

Laboratory Findings

In most cases, the only relevant “laboratory findings” are the results of the audiological examination. In some cases of asymmetrical hearing loss, imaging tests are necessary before a firm diagnosis can be made. In rare cases of rapidly progressive or fluctuating hearing loss, blood tests for infection, immunological disorders, or other systemic disorders can be helpful. Tests of vestibular function may assist diagnosis when there are symptoms of dizziness or unsteadiness. For stable or slowly progressive symmetrical hearing loss without vestibular symptoms, laboratory tests (other than audiometry) are rarely useful in diagnosis or prognosis.

Diagnosis

The nature of the hearing loss (sensorineural, conductive, or mixed) is usually apparent from the audiometric data. The cause(s) are not always certain, but the physician should state an opinion about causation to a “reasonable medical certainty.” In other words, the physician should identify a cause only if it is more likely than not that it contributed to the patient’s hearing loss.

Treatment Prescribed with Response and Prognosis

In many cases (especially when the hearing loss is conductive or mixed), medical or surgical treatment is advised. For most people applying for Social Security Disability Insurance or Supplemental Security Income because of hearing loss, medical treatment is no longer an issue. Hearing aids or cochlear implants may be advised. Prognosis (including expected response to recommended medical, surgical, or prosthetic intervention) is essential, because of the SSA standard of an “impairment which can be expected to last for a continuous period of at least twelve months.” Hearing impairment that has not been present and stable for at least 6 months will rarely meet this standard.

What the Claimant Can Still Do

Information collected during the history-taking and physical examination is combined with audiometric data and information obtained from previous medical and audiometric records to support an opinion regarding the claimant’s ability to hear and understand in a variety of communication settings.

Children

The history and physical examination described above is typical not only for adults, but also for children of school age. Some parts of the physical examination (e.g., tuning fork tests and assessment of ability to communicate) will not be feasible in very young children, who may require additional evaluations before a determination of causation and prognosis can be made. These additional evaluations may include: medical genetics, ophthalmology, pediatric neurology, and speech-language pathology.

ASSESSMENT OF AUDITORY FUNCTION

The basic audiometric test battery recommended by the committee includes assessment of pure-tone thresholds by air conduction and bone conduction, speech recognition thresholds, suprathreshold speech recognition in quiet and noise, and acoustic immittance measures. This protocol will enable the determination of the degree of hearing loss, the site of lesion in the auditory periphery, and the capacity of the individual to understand speech in typical listening environments. Many of the stimuli and procedures for conducting the routine assessment have been standardized for English-speaking adults. Modifications to this test battery are necessary for assessment of children and non-English speakers. Additional electrophysiological measures of auditory function include otoacoustic emissions (OAEs) and auditory evoked potentials (AEPs), which may be performed in place of, or in addition to, the routine audiometric measures. These tests are particularly useful for assessment of infants and young children, as well as individuals who are difficult to test. To ensure accuracy of test results, the test environment must be controlled and the test equipment must be calibrated as described in Chapter 4.

Pure-Tone Threshold Audiometry

Hearing sensitivity is measured separately in each ear for pure-tone signals, which are single-frequency tones generated electronically and transduced through an earphone or bone conduction vibrator. The “gold standard” of hearing sensitivity is the pure-tone audiogram, shown in Figure 3-1. The audiogram displays a listener’s detection thresholds, in dB hearing level (ANSI S3.6-1996; American National Standards Institute, 1996), for pure-tone signals at octave frequency intervals within the range of 250-8000 Hz, in both ears. This frequency range encompasses the spectrum of speech sounds. Consequently, an average of pure-tone detection thresholds corresponds with the average threshold for speech (Fletcher, 1950). Hearing thresholds may also be assessed at selected interoctave frequencies (e.g., 3000 and 6000 Hz), particularly in cases of suspected noise-induced hearing loss.

The standard method for measuring the pure-tone detection threshold is the modified Hughson-Westlake technique (American National Standards Institute, 1997; American Speech-Language-Hearing Association, 1978; Carhart and Jerger, 1959). This is a single-stimulus technique that combines ascending and descending methods of limits. Stimulus duration is 1-2 seconds, and the signal may be steady-state or pulsed (Mineau and Schlauch, 1997). The operational definition of threshold is the lowest level at which a listener detects a signal for 50 percent of the ascending runs. Pure-tone thresholds are assessed in the air conduction mode (250-8000 Hz) and the bone conduction mode (250-4000 Hz). The preferred earphones for air conduction threshold assessment are insert earphones, to prevent ear canal collapse and ostensible high-frequency conductive hearing loss that may occur with supra-aural earphones (Clemis, Ballad, and Killion, 1986). However, supra-aural earphones are acceptable for routine clinical use.

FIGURE 3-1 Graphic representation of a pure-tone audiogram.
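The down-10/up-5 logic of the modified Hughson-Westlake technique can be sketched in code. This is a minimal simulation, not a clinical protocol: the starting level, safety bounds, and the "2 responses on ascending runs" stopping rule are illustrative readings of the 50-percent-of-ascending-runs criterion.

```python
def modified_hughson_westlake(hears, start=40, floor=-10, ceiling=120):
    """Estimate a pure-tone threshold (dB HL) with a down-10/up-5 search.

    `hears(level)` simulates the listener's yes/no response at a level.
    Threshold is taken as the lowest level yielding responses on at
    least 2 ascending presentations (illustrative stopping rule).
    """
    level = start
    ascending_yes = {}          # level -> responses obtained on ascending runs
    heard = hears(level)
    for _ in range(200):        # safety bound for the sketch
        ascending = not heard   # a presentation after a "no" is an ascending run
        level = level - 10 if heard else level + 5
        if not floor <= level <= ceiling:
            return None         # no measurable threshold within equipment limits
        heard = hears(level)
        if ascending and heard:
            ascending_yes[level] = ascending_yes.get(level, 0) + 1
            if ascending_yes[level] >= 2:
                return level
    return None
```

For a simulated listener who reliably responds at 35 dB HL and above, `modified_hughson_westlake(lambda L: L >= 35)` converges on 35; a listener who never responds yields no threshold within the equipment limits.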

Measurement of pure-tone detection thresholds is essential for determination of the degree of hearing loss, and it forms the basis of disability determination for SSA. A classification scheme for degree of hearing loss, shown in Table 3-1, is based on the mean air conduction pure-tone thresholds at 500, 1000, and 2000 Hz (PTA 512) (Clarke, 1981; Goodman, 1965). (PTA 512 is the pure-tone average implied in the text of this report that deals with adult hearing, unless a different frequency set is specified.) This classification scheme is appropriate if the thresholds do not vary dramatically across audiometric frequency. In the event of widely varying thresholds, the degree of hearing loss should be described separately for frequencies with minimum hearing loss and maximum hearing loss.


TABLE 3-1 Categories of Degrees of Hearing Loss, Based on Air Conduction Pure-Tone Average at 500, 1000, and 2000 Hz

Degree of Hearing Loss Category     Pure-Tone Average Range
Normal hearing sensitivity          –10 dB HL to 15 dB HL
Slight hearing loss                 16 dB HL to 25 dB HL
Mild hearing loss                   26 dB HL to 40 dB HL
Moderate hearing loss               41 dB HL to 55 dB HL
Moderately severe hearing loss      56 dB HL to 70 dB HL
Severe hearing loss                 71 dB HL to 90 dB HL
Profound hearing loss               91 dB HL to equipment limits

SOURCE: Clarke (1981).

In particular, hearing loss in adults is often more severe in the higher frequencies than in the lower frequencies.
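The classification in Table 3-1 can be expressed directly in code. The sketch below is illustrative: the boundaries come from the table, while the function names and the handling of non-integer averages are assumptions.

```python
def pta512(t500, t1000, t2000):
    """Three-frequency pure-tone average (dB HL) at 500, 1000, and 2000 Hz."""
    return (t500 + t1000 + t2000) / 3

def degree_of_hearing_loss(pta):
    """Map a PTA 512 value to the Clarke (1981) category of Table 3-1."""
    for upper_bound, category in [(15, "normal hearing sensitivity"),
                                  (25, "slight hearing loss"),
                                  (40, "mild hearing loss"),
                                  (55, "moderate hearing loss"),
                                  (70, "moderately severe hearing loss"),
                                  (90, "severe hearing loss")]:
        if pta <= upper_bound:
            return category
    return "profound hearing loss"
```

For example, thresholds of 30, 45, and 60 dB HL give a PTA 512 of 45 dB HL, which falls in the moderate category.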

This severity scale classification scheme attempts to describe the average communicative effect of a hearing loss without the use of a hearing aid. Individuals with normal hearing experience no significant difficulty hearing faint speech or speech in moderate noise levels. The category included in Table 3-1 of “slight” hearing loss has been applied traditionally to children, who experience detrimental effects of this degree of hearing loss for developing normal speech and language. Recent reports suggest the slight hearing loss category is appropriate for adults as well, because many adults with PTAs in this range complain of difficulty hearing faint speech in noise and may seek amplification (Martin and Champlin, 2000).

Those with mild hearing losses experience difficulty hearing faint speech or speech from a distance, even in quiet. Moderate hearing loss is associated with frequent difficulty with normal speech (Bess and Humes, 2003); conversational speech may be heard only at close range. Individuals with a moderate-to-severe hearing loss may detect the presence of conversational speech but are often unable to understand it without amplification, due to insufficient audibility. In addition, most of the energy in speech is concentrated in the lower frequencies and decreases in the higher frequencies, the region in which adults typically have the greatest hearing loss. Thus, higher frequency speech information will often be inaudible to individuals with an average hearing loss in the moderate range or even the mild range. Because most consonant phonemes in speech are composed of weak, high-frequency energy, individuals with such high-frequency hearing losses may have considerable difficulty understanding speech accurately even though they are able to detect its presence.

In addition to the loss of audibility, hearing impairment in the mild to moderately severe range is often accompanied by distortion of the acoustic signal. The detailed spectral, temporal, and intensive characteristics of the speech signal are processed differently in the impaired auditory system than in the normal auditory system. This is thought to result in considerable difficulty understanding a spoken message, particularly in challenging listening environments that include noise and reverberation (Plomp, 1986; Plomp and Duquesnoy, 1980).

Individuals with a severe hearing loss (71-90 dB HL PTA in the better ear) who do not use a hearing aid cannot detect the presence of conversational speech and may hear only shouted speech. These individuals may derive limited benefit from a hearing aid, depending on their previous experience with amplification.

A profound hearing loss typically is associated with extremely limited capacity to receive speech in the auditory mode. Individuals with a profound hearing loss may hear very loud or amplified sounds, but they often cannot benefit from a hearing aid for understanding speech. Hearing is not the primary communication channel for people with a profound hearing loss who do not use hearing aids or cochlear implants. Because of their limited ability to receive spoken language in the auditory-only mode, people with severe or profound hearing losses are candidates for admission to schools for the deaf, where numerous accommodations and other special services are available. For example, the application information for the National Technical Institute for the Deaf specifically states that the only audiological criterion for admission is a PTA score of 70 dB HL or greater in the better ear.

Speech Audiometry

Speech audiometry encompasses a range of measures that include assessment of speech thresholds, suprathreshold speech recognition in quiet, and suprathreshold speech recognition in noise. Although suprathreshold speech recognition measures are usually obtained for an auditory-only presentation mode (unisensory), they may also be obtained for an auditory + visual presentation mode (bisensory). Stimuli may be recorded or live, but only recorded stimuli can undergo standardization procedures. Recorded speech stimuli are routed through a speech audiometer to ensure accurate signal presentation levels. Standards governing the calibration of the speech signal through the speech audiometer (American National Standards Institute, 1996) provide the reference equivalent threshold sound pressure level (RETSPL) for speech signals.1

Speech Thresholds

The pure-tone examination assesses an individual’s detection thresholds for tones that encompass the range of speech sounds, from 250 to 8000 Hz. Validity of these pure-tone thresholds can be established by verifying that these thresholds actually reflect the ability to detect speech. To that end, a threshold level for speech is obtained to assess the validity of the pure-tone audiogram. Two types of speech thresholds may be measured: the speech detection threshold (SDT) and the speech recognition threshold (SRT). The SDT is the lowest intensity level for 50 percent detection of the speech signal. The SDT normally is obtained at a level that is 8-9 dB lower than the speech recognition threshold. The SRT is the minimum hearing level of a speech signal at which a listener correctly repeats 50 percent of the spoken message.

Usually the speech signals are spondee words,2 but they may be sentences. The SRT obtained with spondees corresponds with the three-frequency PTA (500, 1000, 2000 Hz) in individuals with normal hearing or reasonably flat audiograms, and with a two-frequency PTA (an average of the best two thresholds obtained between 500 and 2000 Hz) in individuals with sloping or rising audiometric configurations (Carhart, 1971; Fletcher, 1950), if the speech signal is calibrated according to the RETSPLs for speech (American National Standards Institute, 1996). The SRT is also useful for predicting the audibility of conversational speech and the need for amplification (Hodgson and Skinner, 1981). Standard audiometric practice includes measurement of the SRT; the SDT is reserved for cases in which an SRT cannot be measured.

1 At present, recommended procedures for recordings of speech materials are available in an appendix to the ANSI S3.6 standard. These recommended procedures specify that recordings shall provide a 1000 Hz calibration tone or a weighted random noise at the beginning of the recording, at the same level as the speech materials on the recording. In addition, the international standard, IEC 60645, indicates that speech level should be expressed as the equivalent continuous sound pressure level determined by integration over the duration of the speech signals with frequency weighting C. This indicates that specified levels of the speech stimulus are based on average levels. Some recorded speech recognition materials do not conform to these standard recording and calibration procedures. For example, the Veterans Administration recordings of speech materials use a calibration tone that reflects the peaks of the carrier phrase for each test on the recording. Normative data obtained with each speech test are therefore appropriate as a reference only if the test is presented using identical procedures for calibration and test level. In order to present a standardized test, the tester should be well informed about the correct calibration procedures and presentation levels for particular speech materials.

2 Spondee words are two-syllable words with equal emphasis on both syllables, for example, “milkman” and “sidewalk.”
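The SRT-PTA correspondence described above can serve as a cross-check on pure-tone validity. A minimal sketch follows; the ±6 dB agreement tolerance and the explicit flat-vs-sloping flag are illustrative assumptions, not prescribed criteria.

```python
def predicted_srt(t500, t1000, t2000, flat_audiogram=True):
    """Predict SRT (dB HL) from pure-tone thresholds: the three-frequency
    PTA for flat audiograms, or the average of the best two thresholds
    between 500 and 2000 Hz for sloping/rising configurations."""
    thresholds = [t500, t1000, t2000]
    if flat_audiogram:
        return sum(thresholds) / 3
    return sum(sorted(thresholds)[:2]) / 2

def srt_corroborates_pta(measured_srt, t500, t1000, t2000,
                         flat_audiogram=True, tolerance_db=6):
    """True if the measured SRT falls within an assumed +/-6 dB of the
    predicted value, suggesting valid pure-tone thresholds."""
    predicted = predicted_srt(t500, t1000, t2000, flat_audiogram)
    return abs(measured_srt - predicted) <= tolerance_db
```

For a flat audiogram of 20, 30, and 40 dB HL, the predicted SRT is 30 dB HL, so a measured SRT of 32 dB HL corroborates the pure-tone data while one of 50 dB HL does not.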

Several standard methods for measurement of SRT produce reliable threshold estimates with recorded speech materials. The method recommended by the American Speech-Language-Hearing Association (ASHA) is a descending technique in which the number of spondee words presented at each step is equivalent to the step size, for either 2-dB or 5-dB step sizes (American Speech-Language-Hearing Association, 1988; Tillman and Olsen, 1973). This technique is both reliable and valid (Beattie, Forrester, and Ruby, 1977; Wall, Davis, and Myers, 1984; Wilson, Morgan, and Dirks, 1973).

Measurement of speech recognition thresholds is useful for corroborating the validity of the pure-tone thresholds and is routinely conducted by most audiologists (Martin, Champlin, and Chambers, 1998). In addition, an estimate of the audibility of conversational speech without amplification can be made directly from the SRT. For example, a score in the better ear that approximates the level of average conversational speech (45-50 dB HL) indicates that everyday speech is minimally audible to the listener, and a score that exceeds 50 dB HL indicates that virtually all unamplified speech signals are inaudible to the listener unless the talker speaks loudly or approaches the listener more closely than the typical 1.0 meter distance of casual conversation. This information is useful for determining hearing aid needs and may be used to substantiate a claim for functional hearing impairment disability. Current SSA guidance prescribes testing of SRT (Social Security Administration, 2003, p. 24).
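The numeric interpretation above can be expressed as a small helper. The 45-50 dB HL breakpoints follow the text; the function name and category labels are hypothetical.

```python
def everyday_speech_audibility(srt_db_hl):
    """Interpret a better-ear SRT against the assumed 45-50 dB HL level
    of average conversational speech (breakpoints from the text)."""
    if srt_db_hl < 45:
        return "conversational speech audible"
    if srt_db_hl <= 50:
        return "conversational speech minimally audible"
    return "unamplified speech largely inaudible"
```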

Suprathreshold Speech Recognition

Suprathreshold measures expressed as percentage-correct speech recognition scores indicate the clarity with which an individual receives and understands a spoken message. Although speech recognition scores are correlated with pure-tone thresholds, the correlation is typically not high enough to accurately predict one from the other for an individual, so speech recognition must be measured directly.

Most speech recognition tests are presented at levels well above the listener’s detection threshold to estimate maximum potential performance. These measures are used for multiple purposes: diagnosing the auditory site of a lesion, assessing potential benefit with amplification, assessing candidacy for a cochlear implant, and assessing everyday speech understanding.

SSA requires testing of “speech discrimination in quiet at a test presentation level sufficient to ascertain maximum discrimination ability” (Social Security Administration, 2003, p. 24). The committee notes, however, that it is difficult to determine if maximum discrimination ability is assessed unless a complete performance-intensity function is obtained. Moreover, performance at the presentation levels needed to measure maximum performance does not necessarily reflect recognition of speech at typical conversational levels. Listener performance on speech recognition tests generally is not correlated with performance on self-assessment measures of hearing disability. The reason for this lack of agreement may be related to the wider range of listening conditions sampled on self-assessment tools than in the clinical setting, the influence of emotional reactions and personality variables on individual responses to the questionnaires, or the high presentation level of traditional speech tests that does not simulate everyday listening levels.

Assessment of suprathreshold speech recognition performance is relevant for SSA disability determination as an indicator of everyday speech understanding or as a potential indicator of speech understanding ability in specific listening conditions. While few studies have demonstrated the correlation between performance on specific speech recognition tests and performance on hearing-critical tasks in the workplace, theoretically there is face validity in utilizing speech recognition tests to predict communication skills on particular everyday tasks.

For example, a speech recognition test presented in quiet with few contextual cues and no visual cues is expected to predict speech understanding over the telephone. Performance on a speech recognition test presented in moderate noise levels probably correlates with on-the-job communication in a typical noisy environment, such as a busy office. The closer the correspondence between the test method (stimulus materials, response mode, signal level, and presence of background noise) and everyday listening situations, the better the predictive value of the clinical test for actual communication performance in the employment setting ought to be.

Open-Set Tests

Monosyllabic Words. A variety of speech materials have been developed and evaluated for assessment of suprathreshold speech recognition. Monosyllabic word tests with free recall (referred to as open-set3) challenge a listener’s ability to recognize discrete phonemes while deriving meaning from the stimulus. They are presented routinely in quiet by the vast majority of audiologists (Martin et al., 1998). Each standardized test involves presenting full lists of the recorded materials (usually 50 words) at the presentation level used in the standardization procedure. Performance is quantified by a percentage-correct score.

3 A speech recognition test is considered to be “open-set” if the listener is required to give a free recall response without any specific response set. A speech test is “closed set” or “closed message” if the listener must choose from a limited set of known response alternatives.

Examples of open-set monosyllabic word lists that have been standardized on a normal-hearing sample include Central Institute for the Deaf Test W-22 (CID W-22) (Hirsh et al., 1952), Northwestern University Auditory Test No. 6 original recordings (NU6) (Tillman and Carhart, 1966), and NU6 Veterans Administration (VA) compact disc recordings (Wilson, Zizz, Shanks, and Causey, 1990). The NU6 test has also been standardized with a sample of listeners with hearing loss (Tillman and Carhart, 1966). The standardization reports of these tests indicate the psychometric functions (performance versus intensity) and the interlist equivalence of the recorded tests. In addition, the variability of percentage-correct scores on open-set speech recognition tests is determined by the number of stimulus items in the test and the test score, as described by the binomial probability theorem (Thornton and Raffin, 1978). The application of this statistical principle indicates that the standard deviation of a test score is inversely proportional to the square root of the number of test items and is larger as the test score approaches 50 percent correct. Hence, one method to reduce variability in speech recognition testing is by presenting a larger number of items in the test.
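The variability effect described by Thornton and Raffin can be illustrated with the binomial standard deviation of a percentage-correct score; the formula below is the standard binomial result, not a reproduction of their tables.

```python
import math

def score_sd(p_correct, n_items):
    """Standard deviation (percentage points) of a speech recognition
    score treated as a binomial proportion over n_items (after
    Thornton and Raffin, 1978)."""
    return 100 * math.sqrt(p_correct * (1 - p_correct) / n_items)
```

Doubling a 25-item list to 50 items shrinks the standard deviation of a 50 percent score from 10.0 to about 7.1 percentage points, and the same 50-item list yields only about 4.2 points at a 90 percent score, showing both the inverse-square-root dependence on test length and the maximum variability near 50 percent correct.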

We note several limitations of open-set monosyllabic word testing in quiet, as the single metric of speech understanding performance, for purposes of SSA disability determination. First, the monosyllabic word stimuli do not represent speech used in everyday communication because they do not contain contextual cues. Second, most recorded and standardized monosyllabic speech recognition tests contain only three or four equivalent lists; this number may not be sufficient for a full evaluation. Third, presentation of the stimuli in quiet does not represent the environment in some workplaces, which may be degraded by a background of steady-state noise or the speech of coworkers.

Two paradigms have been used to assess monosyllabic word recognition performance in noise. The first is presentation of the stimuli in a fixed noise level, using either broadband noise or a competing message (Wilson et al., 1990) and determining the percentage-correct score at several fixed stimulus levels. For listeners with normal hearing, the psychometric functions so determined are shifted toward higher signal levels in both broadband noise and competing message conditions relative to the functions in quiet. The signal-to-noise (S/N) ratio at which listeners achieve a 50 percent correct score can be derived from these psychometric functions. For example, Wilson et al. (1990) assessed performance with a fixed noise level of 60 dB SPL and varied speech levels in 4-dB steps between 52 and 88 dB SPL for the broadband noise condition, and between 40 and 76 dB SPL for the competing message condition. For listeners with normal hearing, 50 percent correct scores were obtained in broadband noise at a S/N ratio of +11 dB and in competing-message noise at a S/N ratio of -4 dB with the VA recording of NU6 (Wilson et al., 1990).
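Deriving the 50 percent point from a measured psychometric function can be sketched as a simple linear interpolation over (S/N, score) pairs. The data points below are hypothetical, not Wilson et al.'s published values, and published norms are based on fitted psychometric functions rather than this crude interpolation:

```python
def snr_at_50(points):
    """Linearly interpolate the S/N ratio (dB) at which performance
    crosses 50 percent correct, from (snr_dB, percent_correct) pairs."""
    pts = sorted(points)
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if p0 <= 50.0 <= p1:
            if p1 == p0:
                return s0
            return s0 + (50.0 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("50 percent point not bracketed by the data")

# Hypothetical data: noise fixed at 60 dB SPL, speech varied in 4-dB
# steps, so S/N = speech level - 60.
data = [(-8, 4), (-4, 18), (0, 36), (4, 58), (8, 81), (12, 94)]
```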

The second paradigm adaptively varies the level of a background competition following each response, to measure the S/N ratio at which a listener achieves a criterion level of performance (Dirks, Morgan, and Dubno, 1982). The procedure is repeated at four signal presentation levels that span the range of average-to-loud conversational speech and amplified speech (60-96 dB SPL). Performance data indicate that most listeners with hearing loss require more favorable S/N ratios than those required by listeners with normal hearing to achieve 50 percent recognition performance, although this varies somewhat with signal presentation level. In both procedures, the normative performance data are specific to the recorded stimuli, noise, presentation levels, and calibration procedures reported.

One emerging issue in assessing monosyllabic word recognition is the role of cognitive and linguistic capabilities that influence activation and selection of a target word from long-term memory. The neighborhood activation model of lexical processing suggests that words are recognized in relation to other phonemically similar words (Luce and Pisoni, 1998). Three factors in particular appear to affect speech recognition performance: the number of phonemically similar words for a target word (“neighborhood density”), the average frequency of occurrence of words that are phonemically similar (“neighborhood frequency”), and the frequency of occurrence of a particular word in the language (“word frequency”). Studies indicate that these structural factors between words influence monosyllabic word recognition performance of listeners with normal hearing and with hearing loss when the words are presented in both quiet and noise (Dirks, Takayanagi, Moshfegh, Noffsinger, and Fausti, 2001). A test based on these principles, the Lexical Neighborhood Test (LNT) (Kirk, Pisoni, and Osberger, 1995), has been developed for assessment of children, but no comparable test is available for evaluation of adults.

Sentence Tests. Speech recognition tests that incorporate the use of everyday sentences are desirable for estimating the level of performance in daily communication situations. However, everyday sentences inherently contain contextual cues for identification of individual words. Thus, a concern with the use of these materials is the extent to which syntactic, semantic, and lexical contextual cues (see Appendix A for definitions) influence overall performance, and the interaction of these factors with a listener’s knowledge of the language. Such an interaction might mean, for example, that standardization on subjects who are native speakers of the test language would yield norms that are inappropriate for nonnative speakers. A second issue is the method of scoring sentence recognition performance, although most contemporary sentence tests assess accuracy based on recognition of keywords. A third issue is whether the test has been standardized in quiet, noise, or both environmental conditions. For SSA disability determination, a sentence test standardized in both quiet and noise would be particularly valuable in estimating functional hearing ability for many job-related tasks.

Sentence recognition tests that have been developed for presentation in quiet include the CID everyday sentences (Silverman and Hirsh, 1955) and the City University of New York (CUNY) Sentences (Boothroyd, Hanin, and Hnath, 1985). Recorded versions of both of these materials are available. The CID everyday sentences contain 10 lists of sentences with 50 keywords per list. Individuals with mild or moderate hearing losses obtain excellent scores on this test; as a consequence, these materials are used most often in assessment of listeners with profound hearing loss (Owens, Kessler, Raggio, and Schubert, 1985; Tyler et al., 1985). The CUNY Sentences (Boothroyd, Hanin, and Hnath, 1985a) are a popular corpus of everyday sentences that are used for assessment of individuals with profound hearing loss. Data reporting listeners’ psychometric performance and test-retest reliability on these two tests are not available.

Several sentence recognition tests in noise have been developed. The Speech Perception in Noise (SPIN) test (Kalikow, Stevens, and Elliott, 1977) assesses a listener’s recognition of keywords embedded in sentences with controlled word predictability: half of the sentences on each list include semantic contextual cues (high-probability sentences) and the other half are semantically neutral (low-probability sentences). The identical keywords are used in high-probability and low-probability sentences appearing in different lists. The revised version of this test (R-SPIN) (Bilger, Nuetzel, Rabinowitz, and Rzeczkowski, 1984) has been standardized with a background of multitalker babble at a signal-to-babble (S/B) ratio4 of +8 dB. Calibration tones are equivalent in root-mean-square (rms) level to the average levels of the sentences and the babble. The eight sentence lists are equivalent, and test-retest reliability is high among listeners with hearing loss (Bilger et al., 1984). Scores on the low-probability items reflect a listener’s ability to recognize the acoustic and phonetic characteristics of the speech signal; scores on the high-probability items indicate the extent to which a listener can utilize semantic contextual cues in addition to acoustic or phonetic information.

4 When the background noise is speech babble, the term S/B or signal-to-babble ratio may be used with the same meaning as S/N or signal-to-noise ratio.

The Connected Speech Test (CST) (Cox, Alexander, and Gilmore, 1987; Cox, Alexander, Gilmore, and Pusakulich, 1988) presents pairs of equivalent passages containing keywords used for scoring. A competing speech babble is presented during the test at two S/B ratios. Percentage-correct scores are derived for each passage and are transformed to rationalized arcsine units (rau) (Studebaker, 1985). Assessment of listeners with normal hearing and with hearing loss on this test demonstrates that it has high content validity, good sensitivity, and a large number of equivalent forms.
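The rationalized arcsine transform cited above stabilizes the variance of percentage scores so that differences are comparable across the score range. The sketch below follows the commonly cited form of Studebaker's (1985) formula; the constants should be verified against the original before any clinical use:

```python
import math

def rau(correct, n_items):
    """Rationalized arcsine units from a count of correct items.
    theta = asin(sqrt(X/(N+1))) + asin(sqrt((X+1)/(N+1)))  (radians)
    RAU   = (146/pi) * theta - 23
    Scores near 50 percent map to about 50 rau; the extremes extend
    beyond the 0-100 range, which is what linearizes the scale."""
    theta = (math.asin(math.sqrt(correct / (n_items + 1.0))) +
             math.asin(math.sqrt((correct + 1.0) / (n_items + 1.0))))
    return (146.0 / math.pi) * theta - 23.0
```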

An abbreviated test of speech in noise (SIN test) (Etymotic Research, 1993) presents high- and low-level sentences to listeners at four S/B ratios. A percentage-correct score is calculated from the 25 keywords appearing in the five sentences for each condition. In addition, the S/B ratio corresponding to a 50 percent correct score can be derived. The original SIN test did not contain equivalent lists, and its level of difficulty was inadequate for listeners with normal hearing and hearing loss (Bentler, 2000). Revisions to the SIN test (R-SIN) (Cox, Gray, and Alexander, 2001) have improved the equivalence of the different lists and the sensitivity of the test for identifying changes in performance, but it requires a longer administration time.

The QuickSIN test was developed recently by Etymotic Research, Inc. (Etymotic Research, 2001) to assess the S/N ratio loss in a one-minute test. The test consists of one list of six sentences with five keywords per sentence and a background noise of four-talker babble. The test is presented at six prerecorded S/N ratios (25, 20, 15, 10, 5, and 0 dB). Raw test results are reported as the S/N ratio at which the listener achieves a 50 percent correct score (the SNR-50), which is then compared to the normal SNR-50 to derive the S/N ratio loss. This loss reflects the increase in S/N ratio required by the listener with hearing loss to achieve a 50 percent correct score, relative to listeners with normal hearing. S/N ratio loss scores are categorized according to degree of severity: normal to near-normal (S/N ratio loss = 0-2 dB); mild (S/N ratio loss = 2-7 dB); moderate (S/N ratio loss = 7-15 dB); and severe (S/N ratio loss > 15 dB). The QuickSIN test is used primarily as an aid to selecting appropriate amplification and as a guide to counseling individuals regarding the potential benefit of different amplification options. The validity and reliability of the QuickSIN test for listeners with normal hearing or with hearing loss have not been reported in the literature.
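The severity bands listed above map directly to a small classification function. The band boundaries come from the text; assigning a boundary value (e.g., exactly 2 dB) to the milder band is an arbitrary convention of this sketch:

```python
def snr_loss_category(snr_loss_db):
    """Map a QuickSIN S/N ratio loss (dB) to the severity bands:
    0-2 normal to near-normal, 2-7 mild, 7-15 moderate, >15 severe.
    Boundary values fall into the milder band here (a convention
    chosen for this sketch, not specified by the source)."""
    if snr_loss_db <= 2:
        return "normal to near-normal"
    if snr_loss_db <= 7:
        return "mild"
    if snr_loss_db <= 15:
        return "moderate"
    return "severe"
```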

The Hearing in Noise Test (HINT) (Nilsson, Soli, and Sullivan, 1994) assesses a listener’s recognition of everyday sentences in quiet and noise. The standardized HINT employs an adaptive technique for adjusting stimulus level based on the accuracy of the listener’s recall of short, individual sentences, to measure the sentence SRT. Reliability of threshold estimates with the HINT presented in quiet has been demonstrated, with a standard deviation of difference scores of 1.39 dB (Nilsson et al., 1994). The standardized HINT also involves measurement of the sentence SRT in speech spectrum noise presented at a fixed level of 72 dBA.5 An adaptive procedure is used to estimate the S/N ratio for a criterion level of performance. For the HINT measured in noise, the mean S/N ratio for listeners with normal hearing is -2.92 dB; this value indicates that people with normal hearing correctly repeat 50 percent of the sentences at a speech level that is less intense than the fixed noise level. A higher S/N ratio on this measure indicates that a listener requires a higher signal level to achieve 50 percent correct recognition, thus reflecting poorer performance. The repeatability of the sentence SRT in noise is high (Nilsson et al., 1994). The sentence SRT in noise demonstrates a listener’s ability to understand speech in noisy environments, such as when operating a vacuum cleaner or attending a small party. Thus, it is viewed as an alternate procedure to other speech recognition measures in noise.

Although the HINT test was developed as a measure of SRT in quiet and in noise, the stimulus materials are often presented at a suprathreshold level to assess a percentage-correct score in quiet or in noise at fixed speech and noise levels. While this application of the HINT may be appealing, there are no data on the interlist equivalence of the HINT, test-retest reliability, or psychometric functions for listeners with normal hearing and with hearing loss at suprathreshold levels. Such data would be important for standardizing the HINT for suprathreshold presentation.
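The adaptive level-adjustment idea behind sentence SRT measurement can be sketched as a simple one-up/one-down staircase that converges on the 50 percent point. The step size, trial count, and simulated listener below are hypothetical; the published HINT protocol differs in its details:

```python
import math
import random

def staircase_srt(listener, start_level=65.0, step=4.0, trials=20):
    """One-up/one-down adaptive track: the level drops after a correct
    response and rises after an incorrect one, so presentations cluster
    around the level giving 50 percent correct. `listener(level)`
    returns True if the sentence was repeated correctly."""
    level = start_level
    levels = []
    for _ in range(trials):
        correct = listener(level)
        levels.append(level)
        level += -step if correct else step
    # Estimate the SRT as the mean of the later presentation levels,
    # after the track has settled.
    tail = levels[len(levels) // 2:]
    return sum(tail) / len(tail)

random.seed(1)

def sim_listener(level, srt=58.0, slope=0.3):
    """Simulated listener with a logistic psychometric function whose
    true 50 percent point is at `srt` dB (hypothetical values)."""
    p = 1.0 / (1.0 + math.exp(-slope * (level - srt)))
    return random.random() < p
```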

Closed-Set Tests

A wide range of closed-message speech recognition tests—in which the items are limited to a set known to the listener—have been developed over the years. In these tests, a target stimulus is presented and the listener’s task is to select the stimulus from a closed set of choices. The number of choices dictates the guess rate for a particular test (e.g., 25 percent guess rate for a four-choice response alternative, 17 percent guess rate for a six-choice response alternative). Individual performance is higher on a closed-set speech recognition test than on an open-set test for the same speech stimuli.

5 dBA refers to a reading obtained from a sound-level meter using the A-weighting scale, which reduces the importance of the low frequencies; it is used for many noise measurements.
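Because closed-set scores include a guess-rate floor, a standard correction can put them on a chance-adjusted scale. This correction is a common psychophysical convention, not a procedure prescribed by the source:

```python
def chance_corrected(p_correct, n_alternatives):
    """Correct a closed-set proportion-correct score for guessing.
    With k alternatives the guess rate is g = 1/k (0.25 for four
    choices, about 0.17 for six), and the adjusted score is
    p_adj = (p - g) / (1 - g), so chance performance maps to 0
    and perfect performance maps to 1."""
    g = 1.0 / n_alternatives
    return (p_correct - g) / (1.0 - g)
```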

Closed-message tests are available with a wide range of stimulus items. Nonsense syllable tests present single syllables in a consonant-vowel or vowel-consonant format. Because the stimuli are not meaningful lexical items, listener performance is thought to reflect perception of the acoustic cues of speech rather than knowledge of the language. The Nonsense Syllable Test developed by researchers at the City University of New York (CUNY NST) (Resnick, Dubno, Hoffnung, and Levitt, 1975) is an example of a standardized, closed-set test that uses nonsense syllables. This test has excellent interlist equivalence and test-retest reliability (Dubno and Dirks, 1982). It permits assessment of the specific consonant phonemes that a listener with hearing loss can identify as well as the frequency of occurrence of particular consonant confusions (Dubno, Dirks, and Langhofer, 1982).

Monosyllabic words have also been used as the stimuli for closed-message tests. One example is the California Consonant Test (Owens and Schubert, 1977), which was developed to reveal the perceptual problems in speech recognition of individuals with high-frequency sensorineural hearing losses. The test assesses an individual’s ability to identify monosyllabic words with initial or final fricatives, sibilants, and plosives; these phonemes are often difficult to perceive for individuals with high-frequency sensorineural hearing loss. The response alternatives are chosen to be highly confusable with the target stimuli, so this test may reveal subtle difficulties in speech perception among individuals with selective high-frequency hearing loss that are not revealed on standard open-set monosyllabic word tests.

The Synthetic Sentence Identification test (SSI) (Speaks and Jerger, 1965) is an example of a closed-set sentence test. The sentence-length stimuli for this test were chosen to follow an approximation to the syntactic order of words in sentences, although the sentences are not meaningful. Listeners are asked to select each sentence they hear from a list of 10 sentences. This test is presented with a competing message either in the stimulus ear or the opposite ear, at varying S/N ratios. Abnormally poor performance on this test provides a diagnostic indication of the side and site of a retrocochlear lesion.

Listener Performance on Speech Recognition Tests

The performance of listeners with hearing loss on speech recognition tests is affected by the degree of hearing loss, the configuration of hearing loss, the site of the lesion, the listener’s knowledge of the language, the speech recognition materials, the speech presentation level, and the listening environment (quiet or noise). Individuals with normal hearing sensitivity or conductive hearing loss generally exhibit excellent performance (90-100 percent correct) on monosyllabic word tests and sentence tests presented in quiet. Listeners with sensorineural hearing loss show a range of scores, from 0 to 100 percent correct.

Individual performance on a variety of speech recognition measures can be predicted using methods specified in ANSI standards, including the articulation index (AI) (American National Standards Institute, 1969) and the speech intelligibility index (SII) (American National Standards Institute, 2002b). These predictions are based on the audibility of the speech signal, the relative intensities of the speech signal and background noise (the S/N ratio) in different frequency bands, and the importance of different frequency bands for accurate performance on a particular speech recognition test. According to both the AI and the SII, predicted performance generally is inversely proportional to the degree of hearing loss, for individuals with hearing loss attributed to cochlear lesions. People with hearing loss primarily affecting the high frequencies may demonstrate poor speech recognition scores at conversational speech levels, especially in noise (Suter, 1985), an observation attributed to the importance of high-frequency consonant information for understanding speech and the direct masking of low-frequency cues by background noise. AI/SII calculations, however, do not always accurately predict actual performance for listeners with hearing loss, particularly in noise (Dirks, Bell, Rossman, and Kincaid, 1986). For example, the SII overpredicted average recognition scores for the low-probability items of the SPIN test and for the Nonsense Syllable Test by approximately 17-18 percent at a +8 dB S/N ratio in a group of older listeners with hearing loss (Hargus and Gordon-Salant, 1995).
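The band-audibility idea behind the AI/SII can be sketched in a few lines. The band-importance weights below are hypothetical, and the audibility rule is the classic simplification that speech occupies a 30-dB dynamic range; the ANSI procedures involve many additional terms (hearing threshold, level distortion, spread of masking):

```python
def simple_sii(speech_spl, noise_spl, importance):
    """Simplified articulation-index-style calculation.
    For each frequency band: audibility = clamp((S/N + 15) / 30, 0, 1),
    reflecting the assumption that speech peaks sit about 15 dB above
    and minima 15 dB below the average level. The index is the
    importance-weighted sum of band audibilities (0 to 1)."""
    assert abs(sum(importance) - 1.0) < 1e-6
    index = 0.0
    for s, n, w in zip(speech_spl, noise_spl, importance):
        aud = max(0.0, min(1.0, ((s - n) + 15.0) / 30.0))
        index += w * aud
    return index
```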

Speech Recognition with Auditory and Visual Cues

The preceding section discussed assessment of speech recognition for signals presented in the auditory modality without amplification. There are circumstances in which it is desirable to assess recognition of speech presented in both auditory and visual modalities. This type of presentation simulates a face-to-face conversation, permitting the receiver to take advantage of visual cues from the face to aid perception. A comparison of performance in the auditory modality to performance in the combined auditory and visual modalities indicates the magnitude of benefit the receiver obtains from speech-reading in addition to acoustic information.

One key issue in assessing auditory + visual speech reception is the use of standardized materials. The importance of audio recordings of speech materials for assessing speech recognition in the auditory modality was discussed previously. Similarly, video recordings are essential for assessing speech recognition when visual cues are presented, because there is wide variability in the extent to which different talkers provide visible speech cues through lip and jaw movement and expressive facial movements (which may be obscured by such things as facial hair). Tests for adults include the Iowa Sentence Test (Tyler, Preece, and Tye-Murray, 1986) and the CUNY Sentences (Boothroyd et al., 1985). The CUNY Sentences are topic-related sentences consisting of 6 lists of 12 sentences each. Performance is scored for keywords correct per list. Performance reliability estimates are predicted from the binomial probability theorem based on 25 independent items per list (Boothroyd, Hnath-Chisolm, Hanin, and Kishon-Rabin, 1988). Audiotape, videotape, and video laser disk recordings are available from the authors. The Iowa Sentence Test also uses everyday sentences as the stimulus materials. This test has been recorded on video laser disk, although equipment for playback of laser disks is no longer available for purchase. Presentation of these audiovisual materials involves routing the visual speech signal through a video monitor under controlled lighting and distance conditions. The auditory signal is routed through an audiometer for control of signal level. Normative data have not been published describing validity, reliability, or performance-intensity functions in combined auditory + visual modalities for these materials.

The appendix of the ANSI standard for the speech intelligibility index (American National Standards Institute, 2002b) suggests a method for calculating the audiovisual SII, Sav, to approximate performance of listeners who are not specifically trained in speech-reading under optimal viewing conditions. The prediction uses the formula

Sav = b + cS,

where S is the audio-only calculated SII, and b and c are constants. For S ≤ 0.2, b and c are 0.1 and 1.5, respectively. For S > 0.2, b and c are 0.25 and 0.75, respectively. Using values from the transfer functions derived for the NU6 Auditec audio recording (Schum, Matthews, and Lee, 1991; Studebaker, Sherbecoe, and Gilmore, 1993), a value of S = 0.3 is associated with an NU6 score of approximately 50 percent correct and an auditory + visual Sav = 0.475, yielding a predicted NU6 score of 80 percent correct. When S = 0.2, the NU6 score would be expected to be about 20 percent, but with the addition of vision Sav = 0.4, and the predicted NU6 score would be about 65-70 percent correct.

Actual auditory + visual performance may be substantially different from these estimated predictions, although data comparing predicted to actual performance are not available. A number of factors influence a person’s unaided speech-reading performance, including familiarity with the talker (Lloyd and Price, 1971), visual acuity (Hardick, Oyer, and Irion, 1970), knowledge of the language, live versus recorded materials, and degree of hearing loss (Erber, 1979). For example, word recognition performance improves 19-28 percent for individuals with severe hearing loss with the addition of visual cues, but improves only 1-15 percent for individuals with profound hearing loss (Erber, 1975). Although reception of everyday sentences is usually optimal for audition + vision conditions (Tye-Murray, 1998), some individuals with severe or profound hearing losses may exhibit very poor performance (10 percent correct) for recognition of everyday sentences presented with auditory + visual cues (Sims and Hirsh, 1982). These unaided performance levels do not reflect the combined benefit of speech-reading and use of amplification or a cochlear implant in suitable candidates.

Multicultural and Multilingual Issues in Evaluation of Speech Recognition

U.S. society is becoming increasingly multicultural and multilingual. Individuals seeking audiometric evaluation may have no knowledge of the English language or may have limited fluency in English. Presentation of a standardized English speech recognition test to these individuals is problematic for several reasons. First, a lack of familiarity with the vocabulary is known to reduce performance on a speech recognition test. As a result, nonnative speakers of English obtain lower scores on English speech recognition tests than do native speakers of the language (Gat and Keith, 1978). Second, listeners whose first language is not English perceive individual consonant and vowel phonemes differently than native speakers of English (Danhauer, Crawford, and Edgerton, 1984). Finally, nonnative speakers derive less meaning from sentence-length materials than native speakers of English, in part because of differences between the overall rhythmic pattern of English and that of many other languages. For bilingual speakers, the age of second language acquisition is an important factor influencing proficiency in English. This is particularly apparent on speech recognition tests in noise: nonnative adult speakers of English who learned English before age 6 perform better in noise than adult listeners who learned the language after puberty, even though all listeners achieve nearly perfect performance in quiet (Mayo, Florentine, and Buus, 1997).

Despite these obstacles, it remains desirable to evaluate a listener’s speech recognition performance during an audiometric assessment. A number of alternative materials and methods have been recommended for evaluating nonnative speakers of English. The preferred strategy is to present a speech recognition test for which recordings are available in the listener’s native language. A listing of speech recognition tests that have recordings available in languages other than English is shown in Table 3-2. Lists of Spanish bisyllabic words tend to yield performance scores by Spanish speakers that are comparable to English monosyllabic word recognition scores by native speakers of English (Weislander and Hodgson, 1989). Listener responses can be scored phonetically with reasonable accuracy even if the audiologist is unfamiliar with the test language (Cakiroglu and Danhauer, 1992; Cokely and Yager, 1993). Alternatively, a closed-set, picture-pointing task can be used for test administration in a language unfamiliar to the audiologist. This method removes any potential bias or scoring error. Tests that incorporate picture-pointing tasks on paper and computer are available in Spanish and Russian (Aleksandrovsky, McCullough, and Wilson, 1998; Comstock and Martin, 1984; McCullough, Wilson, Birck, and Anderson, 1994; Spitzer, 1980).

TABLE 3-2 Speech Recognition Materials Available in Languages Other Than English

Test Type/Language                              Source
Bisyllables
  Spanish                                       Auditec of St. Louis
  Luganda                                       Nsamba, 1979
Monosyllabic Words
  Spanish                                       Auditec of St. Louis
  Russian                                       Aleksandrovsky et al., 1998
Hearing in Noise Test (HINT)
  Spanish                                       Cochlear Corporation, 2002
  French                                        Laroche et al., University of Ottawa, 2002
  Mandarin                                      Wong, University of Hong Kong, 2002
  Cantonese                                     Wong, University of Hong Kong, 2002
  Japanese                                      Kubi, Osaka University, 2002
Picture Identification
  Spanish                                       VA recordings: McCullough and Wilson, 1998
Trisyllabic Words (for SRT)
  Spanish                                       Auditec of St. Louis
Bisyllabic Words (for word recognition)
  Spanish                                       Auditec of St. Louis
Synthetic Sentence Identification (SSI) Test
  Spanish                                       Auditec of St. Louis

NOTE: Entries are publications, vendors, or contact names for obtaining each set of materials.

There are many languages for which there are no recorded versions of speech recognition tests. One recommendation is to assess the speech recognition threshold using digit pairs in place of spondee words, because new language learners acquire knowledge of digits relatively early, and audiologists can easily score the responses (Ramkissoon, Proctor, Lansing, and Bilger, 2002). This digit-SRT test can be administered to listeners of any language background, and results appear to be valid, based on high correlations between digit-SRTs and the PTA measured for nonnative English speakers (Ramkissoon et al., 2002). A comparable test for assessing suprathreshold speech recognition in listeners with various linguistic backgrounds has not been developed.

Acoustic Immittance Measures

Acoustic immittance measures are a series of electrophysiologic tests that assess the integrity of the middle ear system and the structures comprising the acoustic reflex pathway. Acoustic immittance measures are administered routinely as part of the standard audiometric evaluation (Martin et al., 1998), using commercially available acoustic immittance systems calibrated according to ANSI standards (American National Standards Institute, 2002a). Interpretation of the acoustic immittance test results, in conjunction with the audiogram, aids in determining the site of the lesion associated with a hearing loss. The three basic subtests of the acoustic immittance battery are tympanometry, acoustic reflex thresholds, and acoustic reflex adaptation.

Tympanometry

Tympanometry is an assessment of the ease of acoustic energy transfer (acoustic admittance) through the middle ear system, as a function of air pressure. In the normal middle ear system, energy transfer of the middle ear, as measured at the plane of the tympanic membrane, is maximal at atmospheric pressure (0 dekaPascals, or daPa) and is minimal at air pressures that produce a stiffening of the middle ear system (air pressures remote from 0 daPa, such as +200 daPa or –200 daPa).

Tympanometry is performed by presenting a probe tone to the ear canal and measuring the acoustic admittance (in mmhos, an expression of the ease of energy flow that has a reciprocal relationship with impedance as measured in acoustic ohms) of this tone, as the air pressure presented to the sealed ear canal varies from positive to negative (usually in the range +200 daPa to -400 daPa). The standard probe tone frequency is 226 Hz, although many additional probe frequencies can be presented. The resulting tympanogram is a pressure-admittance function that depicts the admittance characteristics of the tympanic membrane and middle ear system of the test ear. Three parameters of the tympanogram can be quantified: peak admittance (Peak Y), tympanometric width (TW), and equivalent volume (Vec) (American Speech-Language-Hearing Association, 1990). Normal values for each of these parameters are shown in Table 3-3.

TABLE 3-3 Norms for Peak Admittance (Y), Tympanometric Width (TW), and Equivalent Volume (Vec)

Age Group                    Y (mmho)     TW (daPa)    Vec (cm3)
Adults
  Mean                       0.79         77           1.36
  90th percentile range      0.30-1.70    51-114       0.9-2.0
Children
  Mean                       0.52         114          0.58
  90th percentile range      0.25-1.05    80-159       0.3-0.9

SOURCE: Margolis and Hunter (1999).

Peak admittance is the admittance value observed at the peak point on the tympanogram, in mmhos. Abnormally low values indicate a stiffening pathology of the middle ear, including otitis media with effusion and otosclerosis. Excessively high peak admittance values are consistent with a hypermobile middle ear system, such as ossicular discontinuity or scarring of the tympanic membrane.

Tympanometric width is the pressure interval on the tympanogram corresponding to a 50 percent reduction relative to the peak height. It is an indication of the shape of the tympanogram. A flat tympanogram, often associated with otitis media with effusion, produces an abnormally wide TW.

Equivalent volume is an indication of the volume of the external auditory canal. It is obtained at a pressure that minimizes the admittance of the middle ear (e.g., +200 daPa or -400 daPa). Thus, the height of the tympanogram at one of these values is the equivalent volume of the external auditory canal. Most acoustic admittance systems provide the Vec in a printout accompanying the tympanogram. Abnormally high Vec coupled with a flat tympanogram is observed in cases of a perforation of the tympanic membrane.

In summary, abnormal tympanograms are observed for a variety of pathological conditions affecting the tympanic membrane and middle ear, including otitis media with effusion, otosclerosis, ossicular discontinuity, tympanic membrane perforation, and a scarred (monomeric) tympanic membrane. A normal tympanogram indicates the presence of a normally functioning middle ear system, either with or without normal hearing.
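The interpretive logic above can be sketched as a small screening function against the adult norms in Table 3-3 (Margolis and Hunter, 1999). The function name, the use of the 90th percentile ranges as cutoffs, and the parenthetical interpretations are illustrative assumptions, not a clinical protocol:

```python
# Screen a 226-Hz tympanogram against the adult norms in Table 3-3.
# Cutoffs are the 90th percentile ranges from the table; treating them
# as hard decision boundaries is a simplification for illustration.

ADULT_NORMS = {
    "peak_y_mmho": (0.30, 1.70),
    "tw_dapa": (51, 114),
    "vec_cm3": (0.9, 2.0),
}

def classify_tympanogram(peak_y, tw, vec, norms=ADULT_NORMS):
    findings = []
    lo, hi = norms["peak_y_mmho"]
    if peak_y < lo:
        findings.append("low peak admittance (stiffening pathology, "
                        "e.g., effusion or otosclerosis)")
    elif peak_y > hi:
        findings.append("high peak admittance (hypermobile system)")
    if tw > norms["tw_dapa"][1]:
        findings.append("abnormally wide TW (e.g., effusion)")
    if vec > norms["vec_cm3"][1]:
        findings.append("large equivalent volume (possible perforation)")
    return findings or ["within adult norms"]
```

For example, the adult mean values from Table 3-3 (0.79 mmho, 77 daPa, 1.36 cm3) classify as within norms, while a flat, wide tympanogram with low peak admittance is flagged on both counts.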

Acoustic Reflex Thresholds

The acoustic reflex is a contraction of the stapedius muscle of the middle ear in response to loud sound. The pathways for this reflex ascend from the peripheral auditory system to the brainstem and then descend both ipsilaterally and contralaterally, so presentation of a loud sound in one ear results in bilateral contraction of the stapedius muscles. This contraction stiffens the middle ear system, causing a reduction in the transfer of low-frequency energy.

The clinical procedure for assessing the acoustic reflex threshold involves presenting a low-frequency probe tone (i.e., 226 Hz) to one ear, presenting high-intensity signals to the same (ipsilateral) or the other (contralateral) ear, and monitoring for a decrease in the acoustic admittance of the probe tone in response to the high-level signal. The minimum stimulus level that results in an observable decrease in acoustic admittance is defined as the acoustic reflex threshold. Acoustic reflex thresholds are usually measured at frequencies from 500 to 2000 Hz, in both ipsilateral and contralateral modes, for each ear.

In listeners with normal hearing, the acoustic reflex threshold approximates 85 dB HL (±10 dB). The acoustic reflex is absent if the signal does not reach the cochlea with sufficient intensity, if there is damage to any of the structures along the acoustic reflex pathway, or if the middle ear system of the probe ear is stiff. Examples of conditions in which the acoustic reflex is absent include conductive hearing loss of 25 dB HL or greater in the stimulus ear, conductive hearing loss of 10 dB HL or greater in the probe ear, sensorineural hearing loss exceeding 75 dB HL in the stimulus ear, a lesion of the facial nerve in the probe ear, and a lesion in the auditory brainstem affecting the crossing pathway of the acoustic reflex arc. The acoustic reflex may also be absent with a lesion of the vestibulocochlear nerve in the stimulus ear, depending on the extent of the lesion. The acoustic reflex is expected to be present in cases of mild, moderate, or moderately severe sensorineural hearing loss associated with a cochlear lesion; in these cases, the acoustic reflex threshold generally increases as a function of the pure-tone threshold (Silman and Gelfand, 1981).
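The absence criteria listed above can be condensed into a hedged sketch. The function and parameter names, and the reduction of each condition to a single number, are our simplifications; real interpretation uses the full ipsilateral/contralateral reflex pattern for both ears:

```python
# Sketch of the reflex presence/absence criteria from the text.
# Single-number inputs are an illustrative simplification.

def reflex_expected(stim_conductive_db=0, probe_conductive_db=0,
                    stim_snhl_db=0, facial_nerve_lesion_probe=False):
    if stim_conductive_db >= 25:      # conductive loss in stimulus ear
        return False
    if probe_conductive_db >= 10:     # stiffened probe-ear middle ear
        return False
    if stim_snhl_db > 75:             # severe sensorineural loss
        return False
    if facial_nerve_lesion_probe:     # stapedius effector impaired
        return False
    return True
```

A moderate cochlear loss (say, 50 dB HL) leaves the reflex present under these criteria, consistent with the expectation stated above for mild-to-moderately-severe sensorineural losses.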

Otoacoustic Emissions

Otoacoustic emissions (OAEs) are noninvasive, objective measures of cochlear functioning. The cochlea contains two distinct types of sensory hair cells: outer and inner hair cells. When an inner hair cell senses vibration (sound), it transmits that information by releasing neurotransmitter at its base, initiating activity in the primary auditory nerve. The outer hair cell also senses sound as vibration, but it does not send information to the central nervous system. Rather, its role is to amplify the vibration at specific regions of the cochlea by expanding and contracting in response to sound (Brownell, Bader, Bertrand, and de Ribaupierre, 1985; Russell and Sellick, 1978). This added vibration is imparted to adjacent inner hair cells, thereby increasing the overall sensitivity of the auditory system to weak sounds. Otoacoustic emissions are produced when some of the energy from the outer hair cells propagates through the cochlear fluids back to the middle ear and tympanic membrane, creating a sound wave in the external ear canal (Kemp, 1986).

OAEs are highly dependent on the outer hair cells (Schrott, Puel, and Rebillard, 1991), which are generally more vulnerable to disease and damage than inner hair cells. Therefore, when OAEs are normal, it is presumed that the inner hair cells are functioning normally as well. Consequently, when an OAE is recorded, one can usually assume that hearing thresholds are 30-40 dB HL or better. In addition, frequency regions of normal and abnormal outer hair cell function can be predicted from the pattern of the OAE response. In this way the OAE can aid in the objective measurement of hearing (but see the caveat below).

Two basic types of OAE are used clinically, and both are acceptable for use in infants, children, and adults. Each type requires a small probe to be placed in the ear canal. The probe contains one or two sound production devices (transducers) as well as a microphone to record the emission itself. The transient-evoked OAE generally uses a click stimulus, although occasionally a tone burst is used. The response is spread over about 10 ms because of travel time in the inner ear. Responses to many stimuli are averaged to reduce noise, and several types of statistical measures are used to determine the presence of a reliable response.

The other type of OAE used clinically is the distortion-product OAE, which is measured in response to two tones. The interaction of the two tones produces distortion, creating a third tone at a frequency predictable from the eliciting tones. A computer presents the primary tones and analyzes the microphone output for the presence of the distortion product. For a thorough review of clinical applications of both types of OAEs, see Prieve and Fitzgerald (2002).
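For the distortion product most often measured clinically, the cubic difference tone at 2f1 − f2, the frequency relationship is simple. This tiny helper (its name is ours) assumes the conventional arrangement f2 > f1:

```python
def dpoae_frequency(f1, f2):
    """Frequency (Hz) of the 2*f1 - f2 cubic distortion product,
    assuming the conventional arrangement f2 > f1."""
    return 2 * f1 - f2

print(dpoae_frequency(1000, 1200))  # -> 800
```

So primary tones at 1000 and 1200 Hz produce a distortion product at 800 Hz, below both primaries, which is where the analysis software looks for the response.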

It is important to keep in mind that an OAE can be present in narrow regions of good outer hair cell function and should be interpreted in a frequency-specific manner. As such, the OAE can help to fill in information from specific frequency regions. Caution should be used to avoid overinterpretation of very narrow, low-amplitude regions of OAE response, which can be spurious noise. Also, OAEs are not strong in low-frequency regions (1000 Hz and below) in infants and toddlers because of physiological noise, and thus absent OAEs in low-frequency regions should not be given great weight in interpretation.

OAE presence indicates good hair cell function and generally indicates that hearing thresholds should be better than 30-40 dB. OAEs, however, cannot be used to determine exact hearing thresholds. In contrast, the absence of an OAE can be due to a variety of causes, from middle ear dysfunction to sensorineural disorders producing hearing loss of any degree. The absence of an OAE alone should not be interpreted as an indication of significant hearing loss. Nor does the presence of an OAE alone ensure normal hearing sensitivity: disease that spares the cochlea but impairs function in the auditory nerve or lower brainstem (for example, acoustic neuroma or auditory neuropathy) can also cause significant hearing loss. The OAE therefore should not be tested in isolation but must be included in a battery of tests for accurate interpretation. Nevertheless, measurement of OAEs provides a quick, noninvasive view of the functioning of the inner ear, and their evaluation is routine in many diagnostic audiology settings.

Auditory Evoked Potentials

Auditory evoked potentials (AEPs) are recordings of neural activity evoked by sound. The AEP is collected using surface electrodes placed on the scalp and near the ear. Computer-generated sounds are presented to subjects via earphones, and each presentation triggers a synchronized recording of neural activity. The responses to many stimuli are averaged in a manner time-locked to the stimulus to reduce the contribution of nonauditory generators, such as random muscle or brain activity.

AEPs occur within fractions of a second following stimulation. The earliest activity, known as the auditory brainstem response (ABR), has a latency of 0-20 ms, depending on the age of the subject and the nature of the sounds used to elicit the response. This response is generated in the auditory nerve and brainstem auditory pathway. Other evoked potentials include the middle latency response (MLR), generated in the thalamus and primary auditory cortex, with a latency of 20-100 ms, and the late cortical response (LCR), generated in the auditory cortex and association areas, with a latency of 100-250 ms. Although all of these AEPs can be used to predict hearing threshold levels, the one used most commonly in the United States is the ABR. In contrast, most of the audiology literature from other parts of the world supports the use of late responses for threshold estimation in adults (Coles and Mason, 1984; Hone, Norman, Keogh, and Kelly, 2003; Hyde, Alberti, Matsumoto, and Yao-Li, 1986; Tsui, Wong, and Wong, 2002). Most recently, a variation of the standard evoked potential technique, referred to as the auditory steady-state response (ASSR), has been used successfully to predict frequency-specific hearing thresholds.

The ABR is a recording of activity generated in the auditory nerve and subsequent brainstem auditory pathways. Short-duration sounds are presented to the ear through earphones. These generate a series of neural impulses in the brainstem auditory pathway.

The ABR requires that the electrode-recorded activity be averaged following presentation of hundreds of stimuli. A computer time-locks the recording of neural activity to the onset of stimuli and creates an average neural response. The averaging process allows the minute (nanovolt level) changes in electrical potentials in the brainstem that occur in response to sound to be distinguished from other electrical activity in the brain and muscles of the head and neck.
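The effect of time-locked averaging can be illustrated with synthetic data: a fixed low-amplitude "response" is buried in random activity many times larger, and averaging stimulus-locked epochs shrinks the residual noise by roughly the square root of the number of epochs. All values below are invented for illustration:

```python
# Sketch of stimulus-locked averaging as used for the ABR.
# A tiny fixed response is hidden in noise ten times its size;
# averaging 100 epochs reduces the noise by about a factor of 10.
import math
import random
import statistics

random.seed(1)
N_EPOCHS, N_SAMPLES = 100, 200

# Fixed "neural response" (arbitrary shape), far below the noise floor.
response = [0.1 * math.sin(2 * math.pi * k / N_SAMPLES)
            for k in range(N_SAMPLES)]

# Each epoch = response + independent background activity (noise).
epochs = [[r + random.gauss(0.0, 1.0) for r in response]
          for _ in range(N_EPOCHS)]

# Average across epochs, time-locked to stimulus onset (sample index).
avg = [sum(e[k] for e in epochs) / N_EPOCHS for k in range(N_SAMPLES)]

single_sd = statistics.pstdev(e0 - r for e0, r in zip(epochs[0], response))
residual_sd = statistics.pstdev(a - r for a, r in zip(avg, response))
```

With 100 epochs, the residual noise in the average is roughly one-tenth the noise in a single epoch, which is why the nanovolt-level ABR becomes visible only after hundreds of stimulus presentations.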

The ABR, when elicited by clearly audible stimuli, produces a consistent series of peaks labeled with Roman numerals I-V. In infants, only peaks I, III, and V are visible. As the stimulus level approaches the threshold of hearing, the ABR peaks diminish in amplitude and number and increase in latency (see Figure 3-2). The lowest stimulus level that produces a visually detectable ABR is termed the ABR threshold. The ABR threshold in most instances is a very good indicator of the hearing threshold that would be determined by standard audiometric techniques (Sininger, Abdala, and Cone-Wesson, 1997; Stapells, Gravel, and Martin, 1995). In the absence of neurological disease, the ABR threshold can be used to predict audiometric thresholds accurately. This is a standard technique for predicting the degree and configuration of hearing loss in infants under 6 months of age and in uncooperative toddlers, and it can also be used to predict hearing thresholds in adult patients who cannot or will not respond to standard audiometric testing procedures. ABRs should not be used in isolation; rather, they should be part of a test battery including otoacoustic emissions, middle ear assessment, and some observation of behavior in response to sound (see Chapter 7). In addition, when the auditory nerve or lower brainstem is specifically impaired, the ABR threshold may not be an accurate indicator of hearing threshold (Sininger and Oba, 2001).

The stimuli used for ABR testing are of different types. Clicks are broadband stimuli that are particularly good for eliciting an ABR because they excite many cochlear elements and neurons almost simultaneously. ABR thresholds determined using click stimuli correspond closely to the average hearing levels of an audiogram (Sininger and Abdala, 1998). Frequency-specific tone bursts (short-duration tones created with slow rising onset and offset ramps) are the best ABR stimuli for determining the degree and configuration of hearing loss and for predicting specific thresholds on an audiogram. Figure 3-2 shows a typical ABR intensity series recorded from an infant with normal hearing in response to 4000 Hz tone bursts. Generally, tone bursts in the frequency range between 500 and 4000 Hz provide adequate information for accurate prediction of hearing thresholds in subjects of any age.

FIGURE 3-2 Infant auditory brainstem response to 4000 Hz tone burst.

Subjects must cooperate for the ABR evaluation by reclining and remaining nearly motionless in a darkened, sound-treated room during the test. Infants generally are tested during natural sleep; toddlers may need mild sedation for a competent evaluation. Older children and adults are tested in a quiet, dark environment while reclined and are encouraged to sleep during the evaluation. The test requires that a minimum of three electrodes be attached to the scalp after mild skin abrasion for adequate connection. The subject is asked to wear headphones or foam insert plugs connected to transducers. A full evaluation may take 15 minutes to 2 hours, depending on results and degree of cooperation. ABRs can be elicited with bone-conducted stimuli when needed for differential diagnosis of conductive hearing loss or evaluation of ears with closed ear canals (Lasky and Yang, 1986; Stuart, Yang, and Green, 1994; Yang and Stuart, 1990; Yang, Rupert, and Moushegian, 1987).

The auditory steady-state response (ASSR), previously known as the steady-state evoked potential (SSEP), is another way of objectively assessing frequency-specific responses. In simple terms, this technique uses pure-tone (carrier) stimuli that are amplitude-modulated by another tone at an appropriate modulation frequency; for infants and children, the appropriate modulation frequency range is about 80-100 Hz. Long segments of these stimuli are presented, and the ongoing electroencephalographic activity is sampled and analyzed in the frequency domain. When the neural activity shows a preference for the modulation frequency over other frequencies in the analysis, it is assumed that the auditory system is responding to the carrier frequency. A response is determined statistically by highly coherent phase in repeated measurements at the target frequency, by significantly greater amplitude at the modulation frequency than at surrounding frequencies, or both.
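A minimal sketch of this frequency-domain detection, using a synthetic recording: the 90 Hz modulation rate, the noise level, and the 2x amplitude criterion are illustrative assumptions, not a clinical detection algorithm:

```python
# Sketch of ASSR-style detection: look for spectral energy at the
# modulation frequency that stands out from neighboring frequencies.
# The "EEG" here is synthetic; all parameters are illustrative.
import cmath
import math
import random

random.seed(0)
FS = 1000          # sampling rate, Hz
N = 2000           # 2 s of data -> 0.5-Hz frequency resolution
FM = 90.0          # modulation frequency, Hz

# Synthetic recording: small steady-state component at FM + large noise.
x = [math.sin(2 * math.pi * FM * k / FS) + random.gauss(0.0, 2.0)
     for k in range(N)]

def bin_amplitude(sig, freq, fs):
    """Magnitude of the DFT of sig at a single frequency."""
    return abs(sum(v * cmath.exp(-2j * math.pi * freq * k / fs)
                   for k, v in enumerate(sig)))

target = bin_amplitude(x, FM, FS)
neighbors = [bin_amplitude(x, f, FS) for f in (FM - 2, FM - 1, FM + 1, FM + 2)]
detected = target > 2 * sum(neighbors) / len(neighbors)
```

Even with noise twice the amplitude of the steady-state component sample-by-sample, the energy concentrated at the modulation frequency stands well clear of the surrounding bins, which is what makes fully automated statistical detection practical.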

ASSR detection is completely automated, and the technique can be quite fast as a clinical measure. Picton and colleagues have shown that, by using a variety of modulation frequencies, up to four stimuli can be tested in each ear simultaneously (Picton et al., 1998). The technique has also been shown to be of value in assessing aided hearing thresholds in the sound field (Picton et al., 1998). Using the multifrequency ASSR, hearing thresholds in adults with normal hearing and with hearing loss have been estimated to within about 10-15 dB of thresholds obtained by standard audiometric techniques (Dimitrijevic et al., 2002).

One reservation about the use of the ASSR for measurement of hearing is the lack of good data on infants and children (Stapells et al., 1995). Rickards et al. (1994) found that normally hearing infants may not have a reliable response below about 40 dB. This would make it impossible to distinguish between mild hearing loss and normal hearing, a distinction that is critically important for determining amplification needs. Perez-Abalo and colleagues (2001) showed that, although they were able to identify hearing loss in the severe and profound range, in general there was only fair agreement between ASSR thresholds and hearing levels in children with hearing loss. Their data also show that the ASSR was unable to determine hearing levels below 40 to 50 dB nHL in these children at any frequency. At this time it would not be prudent to recommend the use of the ASSR to determine hearing loss in infants and young children, especially those with mild and moderate hearing loss.


Assessment When Exaggerated Hearing Loss Is Suspected

On occasion, an individual may feign a hearing loss or exaggerate one during routine audiometric assessment for financial compensation, to gain attention, or to acquire some form of special treatment. The terms “pseudohypacusis” and “malingering” may be used to describe these cases. Pseudohypacusis refers to a hearing loss without an organic basis; malingering describes cases of willful simulation of a hearing loss. Regardless of the motivation for pseudohypacusis, the audiologist’s responsibilities are to determine whether an individual is providing accurate thresholds and, if not, to estimate the true audiometric thresholds.

Some behavioral signs of pseudohypacusis include exaggerated behavior during a case history interview or a reported hearing loss that is inconsistent with an individual's apparent communication ability. Routine pure-tone and speech audiometry often reveal inconsistencies that are indicative of pseudohypacusis:

  • poor agreement between repeated thresholds measured at one frequency (> 5 dB);
  • poor agreement between average pure-tone thresholds and speech recognition thresholds (> 6 dB);
  • exaggerated inconsistency between average pure-tone thresholds measured with a descending procedure and speech recognition thresholds measured with an ascending procedure (> 10 dB) (Schlauch, Arnce, Olson, Sanchez, and Doyle, 1996);
  • absence of a "shadow" threshold curve in the poorer ear in cases of profound unilateral hearing loss;
  • excellent speech recognition scores at relatively low presentation levels (20 dB above admitted threshold); and
  • no response to unmasked bone conduction stimuli with the bone conduction oscillator placed on the poorer ear.

In addition, acoustic reflex thresholds may be elicited at levels that are lower (better) than admitted behavioral pure-tone thresholds, confirming the presence of pseudohypacusis.
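The audiometric cross-checks described above can be sketched as a simple consistency function. The cutoff values come from the text, but the function itself (its name, parameters, and single-number inputs) is an illustrative simplification:

```python
# Sketch: cross-checks for suspected pseudohypacusis. Cutoffs
# (> 6 dB PTA/SRT disagreement, > 5 dB test-retest difference,
# reflex threshold below the admitted behavioral threshold)
# follow the text; the rest is our simplification.

def pseudohypacusis_flags(pta_db, srt_db, retest_diff_db=0,
                          reflex_db=None, admitted_db=None):
    flags = []
    if abs(pta_db - srt_db) > 6:
        flags.append("PTA/SRT disagreement > 6 dB")
    if retest_diff_db > 5:
        flags.append("test-retest difference > 5 dB")
    if (reflex_db is not None and admitted_db is not None
            and reflex_db < admitted_db):
        flags.append("acoustic reflex below admitted threshold")
    return flags
```

An empty list means these routine checks raised no red flags; it does not by itself rule out pseudohypacusis, just as a nonempty list does not prove it.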

Pseudohypacusis frequently resolves with reinstruction and repeated assessment of pure-tone thresholds using an ascending technique, on the same day or on another day. Other behavioral techniques that can be used to estimate pure-tone thresholds with reasonable accuracy, and without the need for special equipment, are the Stenger test for cases of unilateral hearing loss (Newby, 1964) and the Sensorineural Acuity Level (SAL) test (Rintelmann and Harford, 1963). A simple technique, particularly useful for children, is to ask listeners to count a series of beeps presented at variable intensities (Ross, 1964).

For individuals whose exaggerated auditory thresholds do not resolve with these or other special behavioral techniques (e.g., Bekesy tracking, Jerger and Herer, 1961; delayed auditory feedback, Ruhm and Cooper, 1964), the ABR is the technique most audiologists in the United States will choose to estimate hearing sensitivity. Cortical evoked potentials, also known as late potentials, can also be used successfully in the objective assessment of hearing in adults (Coles and Mason, 1984; Hone et al., 2003; Hyde et al., 1986; Tsui et al., 2002). A discussion of techniques for the management of pseudohypacusis can be found in a review chapter by Snyder (2001).

Multiple Conditions

Patients may present with a variety of disabling conditions, in addition to hearing loss, that interfere with routine testing procedures. For example, a severe neurological or motor problem that prevents an individual from providing a time-locked behavioral response requires modification of the standard paradigm in order to obtain an accurate measure of threshold. Any behavioral response that is under the listener's control may be used to signify signal detection, such as a finger motion, an eyeblink, or a directed eye gaze. Loss of visual acuity as the sole additional disabling condition generally has no effect on routine audiometric assessment procedures, although it would negatively affect performance on measures that combine auditory and visual presentation of stimuli. Individuals with mental retardation or developmental disabilities may have difficulty responding to abstract pure-tone signals with a standard behavioral response; techniques used for children of an equivalent developmental age may be applied with these persons. When modified procedures do not produce reliable thresholds (pure tone or speech) or suprathreshold speech recognition scores, assessment with electrophysiological techniques such as the ABR is often used.
