Overview of the Problem and Introduction
People serving in the military, especially those in areas of combat, will at some point be exposed to high-intensity noise of various types. Two possible consequences of such exposures are the development of a hearing loss, most prominent for high-frequency sounds, and tinnitus, typically referred to as “a ringing in the ears.” Depending on a variety of factors, these effects may be either temporary or permanent consequences of such an exposure.
If documentation of the existence of hearing loss or tinnitus at discharge from the military is missing, it is nearly impossible to determine whether hearing loss or tinnitus detected by audiometric testing later in life is the result of noise exposure during prior military service. Both noise and aging, for example, result in similar high-frequency hearing loss, although the specific patterns of hearing loss resulting from each are generally distinguishable until 60–70 years of age (see Chapter 2). This adds to the challenge of determining the cause of the hearing loss when the only existing documentation consists of hearing thresholds measured late in life and many years after military service. In addition, it is quite likely that an individual might have experienced other hazardous noise exposures subsequent to discharge from military service that could result in significant noise-induced hearing loss or tinnitus. After the fact, for example, there are no means currently available to distinguish the hearing loss resulting from several years of military service from the noise-induced hearing loss resulting from subsequent work in a noisy industry or from participation in a wide variety of recreational activities, such as hunting (e.g., Clark, 1991). This serves to underscore the importance of measuring hearing thresholds
at enlistment and at discharge, with annual measurements in between for those most at risk for noise-induced hearing loss and tinnitus.
These uncertainties regarding noise-induced hearing loss and tinnitus have placed the Department of Veterans Affairs (VA) in a quandary. Frequently, VA personnel are called on to determine whether the hearing loss measured in a 70- or 80-year-old veteran is due to this individual’s prior military service. Furthermore, this assessment frequently must be done in the absence of documentation of the measurement of hearing thresholds at or around the time of military service (see Chapter 6). Even with a detailed case history from the veteran, it is next to impossible to draw a conclusion, with any degree of certainty, regarding the association of hearing loss in an older person with prior military service unless audiometric data acquired at entrance into and separation from military service are available.
VA reported that the 2.5 million veterans receiving disability compensation at the end of fiscal year 2003 had approximately 6.8 million separate disabilities related to their military service (Veterans Benefits Administration, 2004).1 Disabilities of the auditory system, including tinnitus and hearing loss, were the third most common type, accounting for nearly 10 percent of the total number of disabilities among these veterans. For the 157,935 veterans who began receiving compensation in 2003, auditory disabilities were the second most common type of disability. These veterans had 75,316 disabilities of the auditory system out of a total of some 485,000 disabilities of all types. At the end of 2004, the monthly compensation payments to veterans with hearing loss as their major form of disability represented an annualized cost of some $660 million (Department of Veterans Affairs, 2005a). The corresponding compensation payments to veterans with tinnitus as their major disability were close to $190 million on an annualized basis (Department of Veterans Affairs, 2005b). Such staggering human and financial costs have served as the rationale for many reports examining hearing loss among military service members over the past several decades (e.g., Johnson, 1957; Yarington, 1968; Walden et al., 1971; Edwards and Price, 1989; Donahue and Ohlin, 1993; Rench et al., 2001).
CHARGE TO THE COMMITTEE
The charge to this committee arose from Public Law 107-330, which required VA to contract with the National Academies to review and evaluate the available scientific evidence regarding the presence of noise-induced hearing loss and tinnitus in U.S. military personnel from World War II through 2002, when the legislation was enacted. Section 104 of this legislation is provided in Appendix A.
The National Academies assigned this work to the Medical Follow-up Agency of the Institute of Medicine (IOM). IOM staff worked with the VA to establish the following Statement of Task for the committee:
An expert committee will provide recommendations to the Department of Veterans Affairs (VA) on the assessment of noise-induced hearing loss and tinnitus associated with military service in the Armed Forces. The committee will review staff-generated data on compliance with regulations regarding audiometric testing in the services at specific periods of time since World War II, review and assess available data on hearing loss in former service members, identify sources of potentially damaging noise during active duty, determine levels of noise exposure necessary to cause hearing loss or tinnitus, determine if the effects of noise exposure can be of delayed onset, identify risk factors for noise-induced hearing loss, and identify when hearing conservation measures were adequate to protect the hearing of service members. This study was mandated by Congress in Section 104 of Public Law 107-330. The committee will conduct its business through meetings over the course of the 24-month study and will issue a final report at the end of the study period.
Staff of the Medical Follow-up Agency will identify veterans from each of the armed services (Army, Navy, Air Force, Marine Corps, and Coast Guard) and from each of the time periods from World War II to the present. A sample of the service medical records of these individuals will be obtained, examined for regulatory compliance regarding audiometric surveillance (including reference, periodic, and termination audiograms), abstracted, recorded, and tabulated.
The charge does not include consideration of effects of noise other than upon the auditory system, including hearing loss and tinnitus, nor of the issues surrounding assisted hearing through hearing aids or prosthetic devices. The study committee was selected to include members with expertise in audiology, bioacoustics, military preventive medicine, occupational medicine, industrial hygiene and hearing conservation programs, epidemiology, and otology.
It should be noted that Public Law 107-330 makes frequent reference to “acoustic trauma” in its charge to the committee (see Appendix A). At the committee’s initial meeting in May 2004, discussion with congressional staff members clarified that the intent of the legislation was not the study of “acoustic trauma,” which is a narrowly defined type of damage resulting from short-term, high-intensity noise exposure, but a study of the more broadly defined “noise-induced hearing loss,” of which acoustic trauma is a
subtype. It was also determined that the committee’s charge did not include assessment of the disability or handicap resulting from noise-induced hearing loss or the means of assigning compensation to specific amounts or degrees of disability. The preceding Statement of Task incorporated these clarifications of the committee’s charge.
The committee met five times from May 2004 through March 2005 and held numerous telephone conference calls through August 2005. During these meetings and conference calls, the committee reviewed and discussed the existing research literature on the topics central to its charge and received information through oral presentations from veterans, representatives of veterans’ organizations, representatives of branches of the military, and consultants. In addition to the face-to-face meetings and telephone conference calls, committee members communicated frequently among themselves and with IOM staff via e-mail. This report is the product of that information gathering and discussion. It is divided into seven chapters. The primary purpose of this chapter, in addition to outlining the issues and the chronology of events following the passage of Public Law 107-330 as noted above, is to provide general background on the primary topics discussed in the ensuing chapters.
ACOUSTICS AND NOISE
Sound is produced by the propagation of pressure waves through a medium and originates from vibrating objects or from the rapid discharge or dissipation of energy, as in an explosive event. The pressure waves trigger responses in the auditory system of the listener. Noise generally refers to disagreeable or unwanted sound.
The magnitude or amplitude of a sound, including noise, can be measured in terms of sound pressure, in units of pascals, or sound intensity, in units of watts/m². More commonly, however, the level of the sound is expressed in decibels (dB), which represent a logarithm of the ratio of two sound pressures or the two corresponding sound intensities. Specifically, the reference quantity in the denominator of the ratio is either a sound pressure of 20 micropascals or a sound intensity of 10⁻¹² watts/m². The reference sound pressure level (SPL) for computation of decibels in acoustic measurements was selected so that 0 dB SPL corresponds approximately to the lowest mid-frequency sound pressure that can be heard by the average normal-hearing young adult under ideal free-field listening conditions. At the other end of the scale, the maximum sound level that can be tolerated by most listeners is 120 dB SPL, and values exceeding 140 dB SPL, even for a brief instant, can cause permanent damage to the ear.
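The decibel computations described above can be illustrated with a short sketch. The function names are hypothetical; the reference quantities (20 micropascals, 10⁻¹² watts/m²) are those given in the text:

```python
import math

REF_PRESSURE_PA = 20e-6      # reference sound pressure: 20 micropascals
REF_INTENSITY_W_M2 = 1e-12   # reference sound intensity: 10^-12 watts/m^2

def spl_from_pressure(p_pa: float) -> float:
    """Sound pressure level in dB SPL from an RMS pressure in pascals."""
    return 20 * math.log10(p_pa / REF_PRESSURE_PA)

def spl_from_intensity(i_w_m2: float) -> float:
    """Sound level in dB from an intensity in watts per square meter."""
    return 10 * math.log10(i_w_m2 / REF_INTENSITY_W_M2)

# The reference quantities themselves correspond to 0 dB SPL:
print(spl_from_pressure(20e-6))            # 0.0
print(spl_from_intensity(1e-12))           # 0.0
# A pressure 100,000 times the reference gives 100 dB SPL:
print(round(spl_from_pressure(2.0), 1))    # 100.0
```

Note the factor of 20 for pressure versus 10 for intensity: intensity is proportional to the square of pressure, so the two expressions yield the same decibel value for the same sound.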
For application in the areas of the effects of noise on hearing, the sound levels are usually measured after being passed through a standardized filtering network, known as A-weighting, that attenuates the amplitude of the sound at frequencies below 500 Hz and above 10,000 Hz to roughly correspond to the perceived loudness of sound (see Appendix C). Sound levels measured with this filtering network are designated dBA.
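The A-weighting correction can be computed analytically from the standard filter definition (IEC 61672); the sketch below uses the standard pole frequencies (20.6, 107.7, 737.9, and 12194 Hz), with the function name being an illustrative choice:

```python
import math

def a_weighting_db(f: float) -> float:
    """Approximate A-weighting correction in dB at frequency f (Hz).

    Uses the standard analytic formula; the result is ~0 dB at 1000 Hz,
    with increasing attenuation at low and very high frequencies.
    """
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.0  # +2.0 dB normalizes to 0 dB at 1 kHz

# Low frequencies are strongly attenuated; 1000 Hz is passed unchanged:
for freq in (100, 500, 1000, 4000, 10000):
    print(freq, round(a_weighting_db(freq), 1))
```

A sound level meter applies this correction to each frequency component before summing, so a noise dominated by low-frequency energy reads substantially lower in dBA than in unweighted dB SPL.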
For very brief impulse sounds, there are two common ways to express the level in dB. One is simply to use the fast-acting “peak” setting of a sound-level meter that is capable of measuring the true peaks of the sound wave. Such measures are often denoted as dBP. Another approach is to adjust the peak amplitude of the waveform for a steady-state sound (usually a 1000-Hz pure tone) so that it matches the peak amplitude of the waveform for the impulse. The level of the matching steady-state sound can then be measured with a sound level meter and, when doing so, the impulse is said to have the same “peak equivalent dB SPL” (pe dB SPL).
In addition to the overall level of the noise in dB, there are many other ways to characterize the relevant acoustic parameters of a noise. For the most part, however, these descriptions focus on characterizing the noise in either the time domain or the frequency domain. The frequency content of two noises, each with an overall level of 100 dBA, for example, can have a significant bearing on the resulting hearing loss measured (if any). Generally, all else being equal, sounds in the frequency range 2000–5000 Hz tend to be more damaging to human hearing than sounds with energy at lower or higher frequencies. With regard to the time domain, all else being equal, brief sounds are less damaging than longer sounds. For example, sounds with durations of less than a few milliseconds, frequently referred to in the present context as impulse noise, must exceed peak levels of 140 dBA to be considered hazardous, whereas a 15-minute steady-state sound is considered hazardous when its level exceeds 100 dBA (e.g., DoD, 2004). In the latter case, “hazardous” to hearing does not mean that hearing loss will occur following a single such exposure. With steady-state noise, the hazard occurs following repeated daily exposures for several years. This is the more common form of noise-induced hearing loss (NIHL), rather than that associated with a single extreme noise exposure, which is more appropriately referred to as “acoustic trauma.”
Research over the past 60 to 70 years has shown that each of these acoustic parameters of noise—its sound pressure level, duration, type (impulse versus steady-state), and frequency content—can influence the hearing loss that is measured following the exposure to noise. The major influences of noise level and daily duration of exposure are captured in a single simplified metric, the noise dose. The noise dose represents the integration of noise level (more accurately, the underlying physical quantities) over the
entire time of exposure. For a given exposure, the dose is of critical importance when evaluating the potential hazard to hearing of a particular noise.
The primary importance of the noise dose was recognized many years ago by the scientific community and has been incorporated into national and international standards designed to estimate the noise-induced hearing loss resulting from noise exposure (ISO-1999 [ISO, 1990]; ANSI S3.44 [ANSI, 1996]). Most often, the noise dose is specified in terms of the 8-hour equivalent continuous noise level in dBA and is derived from the time-weighted average (TWA) of the underlying physical quantities (e.g., sound pressure). A device known as a noise dosimeter is used to establish a specific noise dose. Parameters built into the noise dosimeter that can affect the measured noise dose include a dosimeter-specific threshold level, below which sound levels are not counted; a criterion level; and an exchange rate. The latter two parameters are prescribed by various noise standards. Currently, a criterion level of 85 dBA and an exchange rate of either 3 dB or 5 dB are among the most widely implemented values. The exchange rate is the increase in sound level judged to produce equivalent hazard when the exposure duration is halved. To illustrate this tradeoff between sound level and duration, assuming a criterion level of 85 dBA and a 3-dB exchange rate, an 8-hour continuous exposure to steady-state noise at 85 dBA would have the same noise dose as 88 dBA for 4 hours, 91 dBA for 2 hours, or 94 dBA for 1 hour.
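The level-versus-duration tradeoff described above follows directly from the exchange-rate rule. A minimal sketch, assuming an 85-dBA criterion level and a 3-dB exchange rate (the function name is hypothetical):

```python
def permissible_duration_hours(level_dba: float,
                               criterion_level: float = 85.0,
                               exchange_rate: float = 3.0,
                               criterion_hours: float = 8.0) -> float:
    """Exposure duration at `level_dba` with the same noise dose as
    `criterion_hours` at `criterion_level`. Each `exchange_rate` dB
    above the criterion level halves the allowable duration."""
    return criterion_hours / 2 ** ((level_dba - criterion_level) / exchange_rate)

# Reproduces the equivalence in the text:
for level in (85, 88, 91, 94):
    print(level, "dBA ->", permissible_duration_hours(level), "hours")
# 85 -> 8.0, 88 -> 4.0, 91 -> 2.0, 94 -> 1.0
```

With a 5-dB exchange rate (as in some regulations), the same function gives 4 hours at 90 dBA and 2 hours at 95 dBA instead, illustrating how the choice of exchange rate changes the assessed hazard.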
For the specification of noise levels or dose to be relevant, the measures of noise dose must be made at the location of the individual being exposed and under actual or representative conditions. Ideally, it is the noise dose measured at the individual’s ear at the time of exposure that is desired. Knowing, for example, that a jet engine produces a sound level of 140 dBA in the sound field at 1-meter distance is not particularly informative for an individual working daily (8 hours) about 30 meters from the engine (and while wearing a helmet or hearing protection). Assuming free-field conditions (and a point source), inverse-square-law behavior indicates that the increase in distance from the sound source will reduce the sound level to about 110 dBA at 32 meters. Furthermore, if a helmet or other hearing protection device is worn at the time of exposure, the noise level at the ear could be reduced by an additional 20–25 dB to safe levels.
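The distance attenuation in the jet-engine example follows the inverse square law: each doubling of distance from a point source in a free field reduces the level by about 6 dB. A sketch (the function name is illustrative):

```python
import math

def level_at_distance(level_ref_db: float,
                      ref_distance_m: float,
                      distance_m: float) -> float:
    """Free-field level of a point source at a new distance,
    given the level at a reference distance (inverse square law)."""
    return level_ref_db - 20 * math.log10(distance_m / ref_distance_m)

# 140 dBA measured at 1 m falls to roughly 110 dBA at 32 m:
print(round(level_at_distance(140, 1, 32), 1))  # ~109.9
```

The 20 log₁₀ term reflects that intensity falls with the square of distance; five doublings of distance (1 m to 32 m) therefore remove about 5 × 6 = 30 dB, consistent with the figure in the text.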
It is known, however, that equivalent noise doses do not always yield equivalent noise-induced hearing loss. For example, consider three sequential noise exposures: one to a steady-state low-frequency noise, another to a steady-state high-frequency noise, and a third exposure to a series of impulses. Although the noise dose remains constant regardless of the sequence of these three noise exposures, the resulting hearing loss varies significantly with the order of their presentation (Mills, 1992; Ward, 1991).
THE MEASUREMENT OF HEARING AND TINNITUS
The two most common auditory complaints in humans exposed to noise are hearing loss and tinnitus. In humans, hearing loss is typically measured behaviorally by determining the minimum sound pressure level that can be heard about 50 percent of the time, defined as the hearing “threshold,” at each of several frequencies using pure tones ranging from 250 to 8000 Hz in octave steps. Pure tones at frequencies of 1500, 3000, and 6000 Hz are also frequently included, especially when sharp declines in high-frequency hearing are observed or anticipated. If an individual listener has a hearing threshold at a particular frequency that agrees perfectly with standardized values representing the average thresholds measured in a large group of young, normal-hearing adults at the same frequency, then the hearing threshold is said to be 0 dB “hearing level,” or 0 dB HL. Hearing thresholds at each frequency from 250 to 8000 Hz need not be precisely 0 dB HL, however, to be considered “normal.” Rather, there is a range, generally accepted to be −10 to 25 dB HL at each frequency, that is considered representative of “normal” hearing in adults. The reliability of clinical measurement of behavioral hearing thresholds in humans is such that a difference in thresholds must exceed 5 dB to be considered clinically significant (whether the thresholds are compared across frequencies at the same time or across time at the same frequency). As hearing thresholds increase beyond the upper limit of the normal range (> 25 dB HL), the degree of hearing loss in adults can be described as mild, moderate, moderately severe, severe, or profound. Table 1-1 shows the threshold levels corresponding to each of these categories of hearing loss.
As described in more detail in Chapter 2, a hallmark of noise-induced hearing loss is the appearance of a hearing loss for high-frequency sounds, with the worse hearing thresholds typically occurring at frequencies of 3000–6000 Hz. Frequently, hearing is normal or near normal at lower frequencies (< 1000 Hz) and also returns toward the normal range at 8000 Hz. The result
TABLE 1-1 Categories of Hearing Loss and Corresponding Pure-Tone Thresholds for Adults

Category of Hearing Loss    Pure-Tone Threshold
Normal                      < 25 dB HL
Mild                        26–40 dB HL
Moderate                    41–55 dB HL
Moderately severe           56–70 dB HL
Severe                      71–90 dB HL
Profound                    > 90 dB HL
is a pattern of hearing loss across frequency referred to as a “noise notch” (see Chapter 2). It is usually this noise-notch pattern of hearing loss across frequencies, together with supporting evidence from a detailed case history, that leads to the diagnosis of noise-induced hearing loss. Because noise-induced hearing loss is confined primarily to frequencies at or above 2000 Hz, the effects on auditory perception can be subtle. In addition to producing difficulty hearing high-frequency pure tones, the hearing loss frequently has a negative impact on the perception of other high-frequency sounds, including several consonant sounds of speech and many environmental sounds. These difficulties may not be readily apparent in quiet listening conditions, but they become prominent when there are competing sounds in the background, such as noise or other people talking. As the condition grows more severe, it can interfere with the ability to function socially and professionally (see NRC, 2005). Currently, damage to the ear as a result of noise exposure is not reversible in humans. The most common treatment is amplification of sound through the use of hearing aids.
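The threshold categories of Table 1-1 can be expressed as a small lookup; a sketch with an illustrative function name, treating 25 dB HL itself as the upper edge of the normal range per the text:

```python
def hearing_loss_category(threshold_db_hl: float) -> str:
    """Map a pure-tone threshold (dB HL) to the Table 1-1 category."""
    if threshold_db_hl <= 25:
        return "normal"            # within the -10 to 25 dB HL normal range
    if threshold_db_hl <= 40:
        return "mild"
    if threshold_db_hl <= 55:
        return "moderate"
    if threshold_db_hl <= 70:
        return "moderately severe"
    if threshold_db_hl <= 90:
        return "severe"
    return "profound"

print(hearing_loss_category(10))   # normal
print(hearing_loss_category(45))   # moderate
print(hearing_loss_category(95))   # profound
```

In practice such categories are assigned per frequency (or from an average over selected frequencies), so a noise notch typically yields "normal" at low frequencies and a worse category at 3000–6000 Hz.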
Tinnitus is the other common complaint of those exposed to hazardous noise (see Chapter 4). Noise-induced tinnitus is a subjective, self-reported phenomenon that, unlike hearing loss, cannot be verified objectively, although certain perceptual attributes (e.g., loudness and pitch) can be established reliably under controlled conditions (psychoacoustic testing) (see Chapter 4). In addition, the mechanisms underlying tinnitus are less well understood than those underlying noise-induced hearing loss. Self-report questionnaires reveal a wide range of severity that is not directly correlated with the severity of any associated noise-induced hearing loss. Some individuals find the effects of tinnitus to be more debilitating than the effects of hearing loss. Although no current form of treatment can eliminate tinnitus, various approaches are used to reduce its adverse impact, and research toward a cure continues.
STUDYING THE EFFECTS OF NOISE ON HEARING AND TINNITUS
The modern era of research on the effects of noise on hearing began in the 1940s (e.g., Davis et al., 1949). Most of this research falls into one of three categories: (1) prospective studies of temporary hearing loss in humans; (2) retrospective analyses of permanent hearing loss in humans; and (3) laboratory animal studies of both temporary and permanent effects of noise on the auditory system. Research on tinnitus has used similar approaches, but the subjective nature of tinnitus has posed additional challenges as investigators have worked to develop and validate animal models of tinnitus.
Studies of Temporary Threshold Shift
Prospective studies of temporary hearing loss in humans have followed similar protocols. First, preexposure hearing thresholds are measured for pure tones at one or more frequencies. Next, the listener is exposed for minutes or hours to a sound of some type and level. Finally, the postexposure hearing threshold is measured immediately following cessation of the exposure. Depending on the specific acoustic parameters for the noise exposure, the postexposure hearing thresholds may or may not be greater than the corresponding preexposure hearing thresholds. If the thresholds have worsened from preexposure to postexposure, this is designated as a temporary threshold shift (TTS). Its transient nature is confirmed through repeated postexposure measurements at different recovery times that reveal an eventual return to the preexposure hearing thresholds.
Some key advantages of this approach to the study of the effects of noise on hearing include the use of human subjects rather than laboratory animals, precise control of the acoustical parameters of the noise exposure, and careful measurement of hearing thresholds under optimal listening conditions. The primary shortcoming of this approach has been in generalizing the results of short-term experiments on TTS to the permanent hearing loss resulting from years of repeated exposures or exposure to high noise levels. That is, TTS may not be predictive of eventual permanent changes in hearing thresholds. There have been some suggestions that the mechanisms underlying the temporary and permanent changes in hearing following noise exposure may be different (Nordmann et al., 2000). In addition, ethical considerations restrict the range of exposure conditions that can be examined while still ensuring that the hearing loss is only temporary.
Studies of Permanent Threshold Shift
The laboratory studies of TTS in humans have been complemented by retrospective field studies of permanent threshold shifts (PTSs) in humans. The basic approach here has been to study the hearing loss measured in workers employed in industries with well-defined noise exposures for periods of many years. Of course, use of a valid and reliable means of measuring hearing thresholds is critical. Furthermore, these studies have almost always been cross-sectional rather than longitudinal.
The advantages inherent to retrospective field studies of PTS include the use of human subjects and more direct study of the permanent effects of noise on hearing than drawing inferences from the study of temporary effects. Disadvantages include the cross-sectional nature of the data, resulting in possible cohort bias, and limitations in the ability to generalize from
the data because the study populations are often highly selected. With regard to cohort bias, the assumption in the cross-sectional approach is that the separate groups or cohorts differ only in terms of the independent variable under investigation. For field studies of PTS, the independent variable of interest is often the length of noise exposure. In this case, there may be other confounding factors associated with differences in experiences across generations, rather than with length of noise exposure. The hearing loss in a group of 40-year employees born in 1920, for example, might differ from the hearing loss in a group of 20-year employees born in 1940 for reasons other than their age and length of employment in a noisy industry.
Another important weakness of these studies is the lack of control over the noise exposure, which can vary from individual to individual or location to location, as well as the lack of control over noise exposures occurring outside of the workplace. In addition, a central issue in all such studies involves the interaction of the effects of noise and aging on hearing. Studies of noise-induced PTS, for example, usually “correct” the actual hearing thresholds measured by an amount corresponding to the hearing loss that is assumed to occur in individuals of the same age who were not exposed to noise. One such correction is simple decibel additivity in which the average age-associated hearing loss is subtracted from that measured in the individual, with the resulting amount being considered the “noise-induced permanent threshold shift,” or NIPTS. For example, if a 60-year-old man has worked in a specific noise environment for 40 years and has a hearing loss of 70 dB HL at a particular frequency, and the typical hearing loss at this same frequency is 40 dB HL for a 60-year-old man who has not worked in noise, then the NIPTS is presumed to be 30 dB according to this simple dB-additivity rule. The implications of this rule are discussed in greater detail in Chapter 2, but it is apparent that both the amount of hearing loss assumed for a comparable age- and gender-matched non-noise-exposed cohort, as well as the manner in which the age-related and noise-related hearing loss combine, are critical to the derivation of the NIPTS and our understanding of it.
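The simple dB-additivity correction described above amounts to a one-line subtraction. A sketch with hypothetical names, shown only to make the worked example explicit (the rule itself is an assumption of the correction method, not an endorsed model of how age- and noise-related losses actually combine):

```python
def nipts_db(measured_hl_db: float, age_expected_hl_db: float) -> float:
    """Noise-induced permanent threshold shift under simple dB additivity:
    the measured threshold minus the loss expected from aging alone."""
    return measured_hl_db - age_expected_hl_db

# The example from the text: 70 dB HL measured, 40 dB HL expected from
# aging alone in a 60-year-old man not exposed to occupational noise.
print(nipts_db(70, 40))  # 30
```

The result is only as trustworthy as the assumed age-matched, non-noise-exposed reference value, which is why the choice of reference population and combination rule are emphasized in Chapter 2.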
Laboratory Animal Studies
Animal studies of TTS and PTS, as well as other aspects of the effects of noise on hearing, offer an approach that eliminates several of the disadvantages inherent in human studies noted above. Specifically, noise exposures can be under strict control and both TTS and PTS can be measured in the same animals at various times during the animals’ life spans. Furthermore, a variety of other measures beyond behavioral measurement of hearing thresholds, including a number of physiological and anatomical measurements, can also be obtained from the animals following completion of a
particular noise exposure. The primary disadvantage to this approach, however, lies in the difficulty in generalizing the findings to humans, especially with regard to hazardous noise doses. Frequently, only qualitative comparisons can be made across species. The species most commonly used in laboratory studies of noise-induced hearing loss have been the guinea pig, chinchilla, gerbil, and cat.
In summary, each of the three fundamental approaches to the study of the effects of noise on hearing that have been used over the past 60–70 years has inherent advantages and disadvantages. The scientific community’s understanding of the effects of noise on hearing has been enhanced through integration of findings making use of all of these approaches.
APPROACHES TO HEARING CONSERVATION
Three approaches can be taken to reduce the occurrence of noise-induced hearing loss and tinnitus, whether from industrial or military exposures to noise. First, through engineering, the equipment or devices producing the noise can be redesigned to reduce the sound levels generated at the source. Although there have been successful efforts to do so in many branches of the military (e.g., Yankaskas and Shaw, 1999), there are limitations to the effectiveness of this approach to hearing conservation. Many military operations, especially those on the battlefield or in the training for the battlefield, are inherently noisy. A second approach is to identify those individuals who are susceptible to noise-induced hearing loss or tinnitus prior to exposure to high-intensity sound and isolate or protect those with greater vulnerability to the damaging effects of noise. Identification of individual differences in susceptibility to noise-induced hearing loss, however, has proven to be an elusive goal (see Chapter 2). A third approach is to design and implement a hearing conservation program, which can also contribute to protection against tinnitus. Such programs educate noise-exposed populations about the hazards of high-intensity noise, measure the hearing thresholds of personnel on a regular basis, and instruct individuals in the use of personal hearing protection devices. In this approach, the goal is to attenuate the noise to safe levels at the ears of at-risk individuals. Hearing conservation programs focus on the prevention of damage to hearing and do not typically include work with hearing aids or other devices to assist individuals with hearing impairments. The implementation of hearing conservation programs has been the most viable approach in the majority of industrial and military settings and is frequently the method of choice, but the effectiveness of the programs varies. 
In the military, the types of hearing protection devices available include earplugs, earmuffs, and helmets. The attenuation characteristics of these devices, as well as the pros and cons of each type, are reviewed in Chapter 5.
The use of hearing protection devices as the primary means of hearing conservation, however, has some limitations, especially in the military context. For example, there are many unexpected exposures to high-intensity sounds in the military, especially under combat conditions or training for such conditions (see Chapter 3). Depending on the specific circumstances of the exposure (see Chapter 2), a single such exposure can result in significant hearing loss and tinnitus (e.g., Mrena et al., 2004). Furthermore, the most commonly used hearing protection devices have been conventional passive devices that provide the same amount of attenuation regardless of sound level. As a result, a device designed to protect the wearer’s hearing from high-level noise also makes it difficult to hear lower level sounds, such as the voice of a commander, a fellow soldier, or an approaching enemy. A potentially important development is the military’s recent widespread introduction (in 2004) of level-dependent hearing protection devices, which provide increasing attenuation for higher level impulse sounds while leaving low- and moderate-level sounds unaffected.
Research is also being done to explore pharmacological approaches to reducing susceptibility to noise-induced hearing loss. For example, studies with laboratory animals have found beneficial effects from the administration of antioxidants (e.g., Henderson et al., 1999; Kopke et al., 2005; McFadden et al., 2005). A clinical trial is testing an antioxidant compound in Marine Corps recruits (Boswell, 2004), but results had not been reported at the time the committee completed its work. Studies in animals and humans have also investigated protective effects of supplemental oral magnesium (e.g., Attias et al., 1994; Scheibe et al., 2000; Attias et al., 2004).
Various means have been used to evaluate the effectiveness of hearing conservation programs. These approaches to evaluation are described in Chapter 5. One metric with widespread use in the military is the measurement of significant threshold shift (STS). Although the precise definition of STS has changed in the military over time and across branches of the military (see Chapters 3 and 5), the basic approach has been to try to identify individuals as soon as they show any signs of possible noise-induced hearing loss. It is important to note, however, that STS is not a measure of hearing loss in dB HL. Rather, it is a relative shift in threshold between the current hearing threshold and a previously established reference threshold for that same individual. If 10 dB is used to define an STS, for example, then this could represent a change in hearing from 0 to 10 dB HL (both still within “normal” hearing) between the two measurements or from 20 to 30 dB HL (from “normal” hearing to “mild” hearing loss).
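Because STS is a shift relative to an individual's own reference audiogram rather than an absolute level, the same shift can span very different absolute hearing states. A sketch, using the illustrative 10-dB criterion from the text (function and parameter names are hypothetical):

```python
def is_significant_threshold_shift(reference_db_hl: float,
                                   current_db_hl: float,
                                   criterion_db: float = 10.0) -> bool:
    """True if the current threshold has worsened from the individual's
    reference threshold by at least the STS criterion."""
    return (current_db_hl - reference_db_hl) >= criterion_db

# Both of these flag an STS, even though only the second involves a
# threshold outside the "normal" range:
print(is_significant_threshold_shift(0, 10))   # True: 0 -> 10 dB HL
print(is_significant_threshold_shift(20, 30))  # True: "normal" -> "mild"
print(is_significant_threshold_shift(20, 25))  # False: 5-dB shift only
```

This relativity is the point of the metric: it can catch early noise damage while hearing is still clinically "normal," but it says nothing by itself about the degree of hearing loss in dB HL.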
Regular measurement of hearing is critical to evaluating these programs. Participants in military hearing conservation programs are currently required to have their hearing thresholds measured annually. Obviously, if this is not taking place, then an STS-based approach to hearing conservation will
not be appropriate. As a result, one measure of program effectiveness can simply be the percentage of individuals in the program who receive the required annual measurement of hearing thresholds. Another is the incidence of STS among individuals in the program. Ideally, STS-based approaches include steps to verify that observed STS values are not the result of TTS, usually through follow-up measurement of hearing thresholds after prescribed periods of quiet. STS cases that remain unchanged at follow-up, or that never receive follow-up, are considered to be permanent threshold shifts (PTS). The incidence of PTS cases, therefore, is another possible metric of hearing conservation program effectiveness. Military hearing conservation programs currently mandate follow-up testing, but it is not always completed. These metrics are examined in greater detail in Chapters 3 and 5.
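The three program-effectiveness measures just described reduce to simple proportions. All of the counts in the following sketch are invented for illustration; they are not drawn from any military program:

```python
# Hypothetical hearing conservation program metrics, following the
# measures described above. All counts are invented for illustration.

enrolled = 1200            # personnel in the hearing conservation program
tested_this_year = 1020    # received the required annual audiogram
sts_cases = 51             # tested individuals showing an STS
confirmed_pts = 18         # STS unchanged at follow-up, or never followed up

compliance = tested_this_year / enrolled          # annual-testing compliance
sts_incidence = sts_cases / tested_this_year      # STS incidence among those tested
pts_incidence = confirmed_pts / tested_this_year  # PTS incidence among those tested

print(f"compliance: {compliance:.1%}")        # 85.0%
print(f"STS incidence: {sts_incidence:.1%}")  # 5.0%
print(f"PTS incidence: {pts_incidence:.1%}")  # 1.8%
```

Low compliance undermines the other two metrics: STS and PTS incidence are computed only over those actually tested, so untested personnel are invisible to the program.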
EVALUATING THE STRENGTH OF EVIDENCE
To address the questions posed to the committee by the statement of task, efforts were made to identify a relevant body of evidence through searches of the indexed medical literature and catalogues of reports prepared by or for the military services. Studies and reports were also identified from the reference lists of other documents, and some documents were provided by the military services at the committee’s request.
Published peer-reviewed reports generally carried the most weight in drawing conclusions because the methods and findings of those reports could be assessed. Reports that had not undergone peer review and some unpublished data were also considered by the committee and evaluated in the context of the available body of published literature.
Ideally, in addressing its charge, the committee would have preferred to draw on data from reports of longitudinal, population-based studies of noise-induced hearing loss or tinnitus in humans in military settings. Clearly, such studies would offer the greatest strength of evidence to support the committee’s findings and recommendations. Unfortunately, there are few such studies, and the committee was compelled to turn to other sources of evidence to address its charge.
The sources of evidence considered by the committee included epidemiological, laboratory, and clinical studies directly addressing the question at hand. Epidemiological studies generally carry the most weight in evaluating evidence for or against an association between an exposure (noise) and the resulting health outcome (hearing loss or tinnitus) in humans. These studies measure health-related exposures and outcomes in a defined set of human subjects and use that information to make inferences about the nature and strength of associations between such exposures and outcomes in the population from which the study sample was drawn. Epidemiological
studies can be categorized as experimental (clinical trial) or observational and as controlled (analytic) or uncontrolled (descriptive).
The primary outcome of interest in epidemiological studies is usually the incidence or prevalence of the health condition under investigation. The incidence of a particular condition refers to the number of newly occurring cases that develop over a specific period of time in a particular population and is expressed as either a risk (a probability) or a rate. A condition’s prevalence is the proportion of individuals in a sample who have that condition at a single point in time or during an interval of time. Risk, in the epidemiological sense, is the probability of developing a particular health condition. The term “relative risk” refers to the ratio of the incidence of the condition in a population exposed to some potential hazard of interest, such as occupational noise, to the corresponding incidence in a similar but nonexposed group. Cross-sectional studies do not directly measure the risk associated with an exposure for two important reasons: (1) they do not establish whether the exposure or the condition came first; and (2) cross-sectional samples usually contain old as well as new cases (i.e., prevalent as well as incident cases), further obscuring the temporal sequence of exposure and condition.
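These definitions can be made concrete with a small worked example. The group sizes and case counts below are invented solely to illustrate the arithmetic:

```python
# Worked example of the epidemiological terms defined above, using
# invented numbers. Relative risk is the incidence (risk) in an exposed
# group divided by the incidence in a comparable nonexposed group.

exposed_n, exposed_new_cases = 500, 60        # e.g., noise-exposed group
unexposed_n, unexposed_new_cases = 500, 20    # similar nonexposed group

risk_exposed = exposed_new_cases / exposed_n        # incidence (risk) = 0.12
risk_unexposed = unexposed_new_cases / unexposed_n  # incidence (risk) = 0.04
relative_risk = risk_exposed / risk_unexposed       # ~3.0: exposure triples the risk

# Prevalence, by contrast, is a snapshot: the proportion with the
# condition at one point in time, old and new cases alike.
sample_n, cases_at_survey = 1000, 150
prevalence = cases_at_survey / sample_n             # 0.15

print(relative_risk, prevalence)
```

A cross-sectional survey yields only the prevalence figure; without knowing which cases arose after the exposure, the relative-risk calculation above cannot legitimately be performed.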
Among the various epidemiological designs, experimental studies generally have the advantage of random assignment to exposures and, therefore, have the potential to be the most influential in assessing the strength and direction of an association, although they are subject to a potential selection bias. Experimental studies of noise exposure and hearing loss or tinnitus in humans must be designed to prevent permanent harm. As a result, such studies can be conducted to study only those exposures resulting in temporary hearing loss or tinnitus.
Most of the epidemiological studies considered by the committee were observational. There were few prospective observational studies relevant to the committee’s charge. Most were cross-sectional rather than longitudinal. Observational studies that compare exposed subjects and unexposed controls are more definitive than uncontrolled studies, but uncontrolled studies are also important for showing the presence of an outcome in an exposed population. Most of the epidemiological studies considered by the committee were studies without control groups. Furthermore, the biggest drawback with cross-sectional epidemiological studies is that the outcome of interest (e.g., a hearing threshold > 25 dB HL) is measured only once—at the time of the study or some other specified point—making it impossible to demonstrate that the outcome occurred after the exposure of interest (e.g., noise), a temporal relationship necessary to establish a causal link between the two.
Among epidemiological research designs, case reports and case series are generally weakest. They are inadequate by themselves to establish an association, but they can be valuable in drawing the attention of the scientific community to a problem and in generating testable hypotheses. The committee did not rely on case reports in reaching its conclusions.
The vast majority of the data available on noise-induced hearing loss and tinnitus in military personnel are not epidemiological. The data come from a variety of clinical, descriptive, cross-sectional studies of variously defined groups of military personnel and were reported in ways that give little or no indication of the prevalence or incidence of either hearing loss or tinnitus. Instead, the dependent measures were generally hearing thresholds at various pure-tone frequencies, reported as average thresholds for groups defined by age or length of military service. In the absence of control groups in most of these studies, the committee turned to standardized compilations of “control data” on hearing thresholds for groups of screened or unscreened individuals of various ages for comparison purposes. These data and the limitations of this approach are described in more detail in Chapter 3.
Some of the questions posed in the charge to the committee could be addressed by existing data on noise-induced hearing loss, much of which is based on laboratory studies of humans and animals. As noted, the laboratory studies in humans are necessarily restricted to the investigation of temporary effects, either TTS or temporary tinnitus. Central to this experimental approach is the assumption that the TTS observed at 2 minutes postexposure has a defined relationship with the PTS that occurs in humans following 10–20 years of exposure to industrial noise (CHABA, 1968). To the extent that this association is valid, laboratory studies of TTS in humans can provide insights into exposure parameters affecting PTS. In nearly all cases of laboratory studies in humans, the dependent variable has been some measure of hearing threshold. These studies cannot provide precise estimates of the risk of experiencing hearing loss or tinnitus from noise exposure.
As noted previously, studies of noise-induced hearing loss in laboratory animals make it possible to examine the associations among TTS, PTS, and underlying cochlear damage in the same set of subjects and under strict laboratory control of the exposure. Such data represent a powerful tool for understanding the mechanisms underlying noise-induced hearing loss in a variety of mammalian species, including variables impacting the development of and recovery from noise-induced hearing loss, as well as establishing the relationships among TTS, PTS, and underlying pathology. The dependent measures in these laboratory studies typically include some measure of hearing thresholds, measured behaviorally or physiologically, and measures of anatomical damage. Where the results from laboratory animal studies are consistent with findings from human studies, they add assurance that the human results are biologically reasonable.
Some aspects of the committee’s charge were best addressed with data from well-designed and carefully executed human epidemiological studies.
When such data were not available, the committee turned to alternate data with the resulting caveats to its findings noted. Other aspects of the committee’s charge were best addressed with data from well-designed and carefully executed laboratory studies with humans and animals. Both forms of evidence are considered valid, depending on the issue or questions being addressed, and have been weighed by the committee in evaluating the strength of evidence supporting its findings.
With the foregoing in mind, the committee adopted the following scale for the strength of evidence. As will be seen, the strength of evidence in this scale is tied to the presence and number of “strong studies” supporting a particular committee finding. In general, observational epidemiological studies cannot by themselves establish causal associations. Strong epidemiological studies in support of a statistical association between an exposure and a condition, whether causal or not, could include well-designed cross-sectional studies where the likelihood of chance findings has been minimized, known confounding factors have been considered in the analysis, and known or potential biases have been eliminated. However, in support of a causal association, “strong studies” are generally well-designed, prospective observational human population studies or randomized controlled trials in which chance, bias, and confounding are similarly treated. With respect to laboratory studies, “strong studies” are well-designed and carefully executed and interpreted human or animal studies in which chance, bias, and confounding have also been treated in a similar way.
Sufficient evidence of a causal relationship: Consistent evidence from many strong longitudinal studies.
Sufficient evidence (of an association): Evidence from several strong longitudinal or cross-sectional studies.
Limited or suggestive evidence (of an association): No evidence from strong studies, but some evidence from other studies of sufficient quality.
Not sufficient evidence to determine whether an association exists: Few or no studies of sufficient quality.
Sufficient evidence that no association exists: Several strong studies that find no association.
However, when applying the foregoing scale for strength of evidence, the context of the specific question being addressed must be kept in mind. For example, if the specific question posed or the issue addressed pertains to the effect of noise on humans and the only evidence available is from studies of laboratory animals, this evidence is considered not to be sufficient regardless of the number of “strong” studies available from laboratory animals.
THE COMMITTEE’S REPORT
The remainder of the report summarizes the evidence regarding the questions put to the committee concerning military service and noise-induced hearing loss and tinnitus and presents the committee’s findings. Chapter 2 reviews the mechanisms of noise-induced hearing loss and evidence regarding the impact of various risk factors. Chapter 3 reviews noise and noise hazards associated with military service. Chapter 4 focuses on tinnitus, especially its association with noise exposure and hearing loss. Chapter 5 turns to the nature and effectiveness of hearing conservation programs in the armed services. Chapter 6 presents the results of an audit of the service medical records of military personnel sampled from various periods of service from World War II to 2002. Finally, Chapter 7 provides a summary that draws on the information presented in preceding chapters to address the specific questions and issues posed in the Statement of Task and in Public Law 107-330.
ANSI (American National Standards Institute). 1996. ANSI S3.44 Determination of Occupational Noise Exposure and Estimation of Noise-Induced Hearing Impairment. New York: Acoustical Society of America.
Attias J, Weisz G, Almog S, Shahar A, Wiener M, Joachims Z, Netzer A, Ising H, Rebentisch E, Guenther T. 1994. Oral magnesium intake reduces permanent hearing loss induced by noise exposure. American Journal of Otolaryngology 15(1):26–32.
Attias J, Sapir S, Bresloff I, Reshef-Haran I, Ising H. 2004. Reduction in noise-induced temporary threshold shift in humans following oral magnesium intake. Clinical Otolaryngology and Allied Sciences 29(6):635–641.
Boswell S. 2004, February 17. New weapon against hearing loss? Marines participating in clinical trial to prevent hearing loss. ASHA Leader. Pp. 1, 20–21.
CHABA (National Academy of Sciences–National Research Council Committee on Hearing, Bioacoustics, and Biomechanics). 1968. Proposed Damage-Risk Criterion for Impulse Noise (Gunfire). Report of Working Group 57 (Ward WD, ed.). Washington, DC: National Academy of Sciences.
Clark W. 1991. Noise exposure from leisure activities: A review. Journal of the Acoustical Society of America 90(1):175–181.
Davis H, Parrack H, Eldredge D. 1949. Hazards of intense sound and ultrasound. Annals of Otology, Rhinology, and Laryngology 58(3):732–738.
Department of Veterans Affairs. 2005a. Service-Connected Compensation—Veterans with Hearing Impairment as of December 21, 2004. Data provided to the Institute of Medicine Committee on Noise-Induced Hearing Loss and Tinnitus Associated with Military Service from World War II to the Present, Washington, DC.
Department of Veterans Affairs. 2005b. Veterans Receiving VA Disability Compensation with Tinnitus as Their Major Disability and Their Monthly Dollar Awards by Period of Service. Data provided to the Institute of Medicine Committee on Noise-Induced Hearing Loss and Tinnitus Associated with Military Service from World War II to the Present, Washington, DC.
DoD (Department of Defense). 2004. Department of Defense Instruction 6055.12: DoD Hearing Conservation Program. Washington, DC: DoD.
Donahue AM, Ohlin DW. 1993. Noise and the impairment of hearing. In: Deeter DP, Gaydos JC, eds. Occupational Health: The Soldier and the Industrial Base. Falls Church, VA: Office of the Surgeon General.
Edwards RR, Price DR. 1989. Descriptive Analysis of Medical Attrition in U.S. Army Aviation. Fort Rucker, AL: U.S. Army Aeromedical Research Laboratory.
Henderson D, McFadden SL, Liu CC, Hight N, Zheng XY. 1999. The role of antioxidants in protection from impulse noise. Annals of the New York Academy of Sciences 884:368–380.
ISO (International Organization for Standardization). 1990. ISO 1999: Acoustics—Determination of Occupational Noise Exposure and Estimation of Noise-Induced Hearing Impairment. Geneva, Switzerland: ISO.
Johnson KO. 1957. Problems in military audiometry: A CHABA symposium. Veteran’s compensations for hearing loss. Journal of Speech and Hearing Disorders 22(5):731–733.
Kopke R, Bielefeld E, Liu J, Zheng J, Jackson R, Henderson D, Coleman JK. 2005. Prevention of impulse noise-induced hearing loss with antioxidants. Acta Otolaryngologica 125(3):235–243.
McFadden SL, Woo JM, Michalak N, Ding D. 2005. Dietary vitamin C supplementation reduces noise-induced hearing loss in guinea pigs. Hearing Research 202(1-2):200–208.
Mills JH. 1992. Noise-induced hearing loss: Effects of age and existing hearing loss. In: Dancer A, Henderson D, Salvi RJ, Hamernik RP, eds. Noise-Induced Hearing Loss. St. Louis, MO: Mosby Year Book. Pp. 237–245.
Mrena R, Savolainen S, Pirvola U, Ylikoski J. 2004. Characteristics of acute acoustical trauma in the Finnish Defence Forces. International Journal of Audiology 43:177–181.
Nordmann AS, Bohne BA, Harding GW. 2000. Histopathological differences between temporary and permanent threshold shift. Hearing Research 139(1-2):13–30.
NRC (National Research Council). 2005. Hearing Loss: Determining Eligibility for Social Security Benefits. Dobie RA, Van Hemel S, eds. Washington, DC: The National Academies Press.
Rench ME, Johnson S, Sanders T. 2001. Cost Benefit Analysis for Human Effectiveness Research: Bioacoustic Protection. Wright-Patterson Air Force Base, OH: Air Force Research Laboratory.
Scheibe F, Haupt H, Ising H. 2000. Preventive effect of magnesium supplement on noise-induced hearing loss in the guinea pig. European Archives of Oto-Rhino-Laryngology 257(1):10–16.
Veterans Benefits Administration. 2004. Veterans Benefits Administration Annual Benefits Report for FY 2003. Washington, DC: Department of Veterans Affairs.
Walden BE, Worthington DW, McCurdy HW. 1971. The Extent of Hearing Loss in the Army: A Survey Report. Washington, DC: Walter Reed Army Medical Center.
Ward WD. 1991. The role of intermittence in PTS. Journal of the Acoustical Society of America 90(1):164–169.
Yankaskas KD, Shaw MF. 1999. Landing on the roof: CVN noise. Naval Engineers Journal 111(4):23–34.
Yarington CT. 1968. Military noise induced hearing loss: Problems in conservation programs. Laryngoscope 78(4):685–692.