
4


Combat Helmet Testing

4.0 SUMMARY

This chapter describes how combat helmets are tested. It includes a brief summary of the testing process, a description of the test threats, and a discussion of the various sources of variation in the testing process.

4.1 INTRODUCTION

Federal government departments and agencies are required to “develop and manage a systematic, cost-effective government contract quality assurance program to ensure that contract performance conforms to specified requirements” (Title 48 of the Code of Federal Regulations, subpart 246.1) (CFR, 2013). In particular, first article testing (FAT)1 is conducted to ensure that “the contractor can furnish a product that conforms to all contract requirements for acceptance” (FAR, 2013). Once a contractor has passed FAT and begins production, lot acceptance tests (LAT)2 are used to assess whether combat helmets continue to conform to contract requirements during regular production.

As part of FAT and LAT, combat helmets are subjected to a series of ballistic and nonballistic tests. Ballistic tests assess the helmet’s ability to prevent penetration and to limit helmet deformation to a given threshold. Nonballistic tests assess other helmet capabilities, including impact resistance, pad compression durability, coating adhesion durability, and helmet compression resistance. Helmets are also subjected to a series of inspections, such as whether the shell dimensions meet those specified in the purchase description. All of these tests and inspections are intended to assess whether a particular manufacturer’s product conforms to the government’s contract specifications as outlined in the purchase description (U.S. Army, 2012).

The goal of testing is to determine whether helmets are of acceptable quality based on a limited test sample. Not every helmet can be tested, because a tested helmet is damaged in the testing process; decisions about the larger collection of helmets must therefore be based on a limited sample. Because only a sample of helmets can be tested, the resulting test conclusion is subject to uncertainty and to unavoidable risks for both the Department of Defense and the manufacturer. Test protocol design requires making trade-offs between the risks to the two groups, and the size of each risk is determined by the test design and by any limitations on resources.
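
The nature of this trade-off can be illustrated with a simple binomial acceptance-sampling calculation. The sketch below assumes, purely for illustration, a plan of 50 independent shots with zero allowed penetrations and two hypothetical per-shot penetration rates; none of these numbers is taken from the actual protocols.

```python
from math import comb

def prob_accept(n_shots: int, c_allow: int, p_pen: float) -> float:
    """Probability that a lot is accepted under a binomial sampling plan:
    accept if at most c_allow penetrations occur in n_shots, assuming each
    shot independently penetrates with probability p_pen."""
    return sum(comb(n_shots, k) * p_pen**k * (1 - p_pen)**(n_shots - k)
               for k in range(c_allow + 1))

# Illustrative plan: 50 shots, accept only if there are no penetrations.
n_shots, c_allow = 50, 0
p_good, p_poor = 0.01, 0.05   # hypothetical per-shot penetration rates

manufacturer_risk = 1 - prob_accept(n_shots, c_allow, p_good)  # good product rejected
government_risk = prob_accept(n_shots, c_allow, p_poor)        # poor product accepted
print(f"Manufacturer's (producer's) risk: {manufacturer_risk:.2f}")
print(f"Government's (consumer's) risk:   {government_risk:.2f}")
```

Tightening the plan (more shots or fewer allowed penetrations) lowers the government’s risk but raises the manufacturer’s, and vice versa; the sample sizes in a protocol reflect where that balance is struck.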

4.2 BALLISTIC TESTING METHODOLOGY

The helmet ballistic testing methodology has been derived from existing body armor testing methods. The methodology for ballistic testing for body armor follows from testing done in the late 1970s by Prather et al. (1977) that, however tenuously, connects the current body armor methods and the test measures to some evidence of injury (NRC, 2010, 2012). For combat helmets, however, the current testing methods and measures have no connection to research on head and brain injury. The lack of connection between injury and current test methods and measures is a significant concern.

Test Processes

During a test, the helmet being tested is affixed to a headform packed with modeling clay, and a rifle-like device is used to fire various projectiles into the helmet. The clay is used as a recording medium for: (1) assessing penetration should the projectile or portions thereof pass through the helmet into the clay, and (2) measuring the deformation of the helmet, where an impression is left in the clay surface as a result of the ballistic impact pushing the helmet into the clay. Electronic instrumentation is used to measure projectile velocity before impact. Appendix E describes the ballistic testing process in more detail.
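
For reference, the quantities recorded for each shot can be thought of as a simple record. The structure below is a hypothetical illustration; its field names and units are illustrative only and are not taken from any test protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShotRecord:
    """One ballistic test shot (hypothetical fields for illustration only)."""
    helmet_id: str            # identifier of the helmet under test
    shot_location: str        # e.g., "front", "rear", "left", "right", "crown"
    velocity_mps: float       # measured projectile velocity before impact (m/s)
    penetration: bool         # True if the projectile or fragments reach the clay
    bfd_mm: Optional[float]   # depth of the impression left in the clay, if no penetration

example = ShotRecord("helmet-001", "crown", 425.0, False, 10.7)
print(example)
```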

 

_________________

1The current DOT&E protocol for combat helmet first article testing is reprinted in Appendix B.

2The current DOT&E protocol for combat helmet lot acceptance testing is reprinted in Appendix B.


TABLE 4-1 DOT&E First Article Testing Helmet Test Matrix for the Advanced Combat Helmet

| Threat | Ambient | Hot | Cold | Seawater | Weatherometer | Accelerated Aging |
|---|---|---|---|---|---|---|
| 2-grain | 1 V50; Size: Small | 1 V50; Size: Medium | 1 V50; Size: Large | 1 V50; Size: XL | | |
| 4-grain | 1 V50; Size: XL | 1 V50; Size: Small | 1 V50; Size: Medium | 1 V50; Size: Large | | |
| 16-grain | 1 V50; Size: Large | 1 V50; Size: XL | 1 V50; Size: Small | 1 V50; Size: Medium | | |
| 17-grain | 1 V50; Size: Medium | 1 V50; Size: Large | 1 V50; Size: XL | 1 V50; Size: Small | 1 V50; Size: Large | 1 V50; Size: Medium |
| 64-grain | 1 V50; Size: Large | 1 V50; Size: XL | 1 V50; Size: Medium | 1 V50; Size: Small | | |
| Small arms | 1 V50; Size: Medium | 1 V50; Size: Small | 1 V50; Size: XL | 1 V50; Size: Large | 1 V50; Size: Medium | |
| 9-mm RTP/BTD shell | 60 shots; 12 helmets (Small: 3, Medium: 3, Large: 3, XL: 3) | 60 shots; 12 helmets (Small: 3, Medium: 3, Large: 3, XL: 3) | 60 shots; 12 helmets (Small: 3, Medium: 3, Large: 3, XL: 3) | 60 shots; 12 helmets (Small: 3, Medium: 3, Large: 3, XL: 3) | | |
| 9-mm RTP hardware | 17 shots; 9 helmets (Small: 2, Medium: 3, Large: 2, XL: 2) | 16 shots; 8 helmets (Small: 2, Medium: 2, Large: 2, XL: 2) | 16 shots; 8 helmets (Small: 2, Medium: 2, Large: 2, XL: 2) | 16 shots; 8 helmets (Small: 2, Medium: 2, Large: 2, XL: 2) | | |
| Small arms RTP | 17 shots; 17 helmets (Small: 4, Medium: 5, Large: 4, XL: 4) | 16 shots; 16 helmets (Small: 4, Medium: 4, Large: 4, XL: 4) | 16 shots; 16 helmets (Small: 4, Medium: 4, Large: 4, XL: 4) | 16 shots; 16 helmets (Small: 4, Medium: 4, Large: 4, XL: 4) | | |

NOTE: BTD, ballistic transient deformation; RTP, resistance to penetration; V50, velocity at which the probability of penetration is 0.5; XL, extra large. SOURCE: DOT&E (2011).

There are two types of measurements that are made on the tested helmet: (1) whether the bullet penetrates the helmet or not (called resistance to penetration [RTP]); and (2) if there is no penetration, a surrogate measure of the deformation of the helmet referred to as the backface deformation (BFD). These measures are formally defined in Chapter 5.

Per the Director, Operational Test and Evaluation (DOT&E) protocol, the test is conducted as a sequence of five ballistic impacts: one each to the front, rear, left, and right sides of the helmet and to the helmet crown. Both penetration and BFD, a measure of the indent in the clay caused by the ballistic forces from the bullet, are measured. Current protocol also tests the V50 ballistic limit using a series of 6 to 14 shots to the five regions of the helmet at varying velocities per MIL-STD-662F (DoD, 1987). (See Chapter 9 for further discussion of the methodology for estimating V50.)
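
As a rough illustration of the V50 concept, the sketch below computes the common average-of-mixed-results estimate: the mean of an equal number of the highest velocities that were stopped and the lowest velocities that penetrated. The shot data and velocities are hypothetical, and the additional requirements of MIL-STD-662F (for example, limits on the velocity spread of the shots used) are not modeled.

```python
def v50_estimate(shots, n_pairs=3):
    """Simplified V50 estimate: average the n_pairs highest velocities that were
    stopped (partial penetrations) with the n_pairs lowest velocities that went
    through (complete penetrations).

    shots: list of (velocity, penetrated) tuples."""
    stopped = sorted(v for v, penetrated in shots if not penetrated)
    through = sorted(v for v, penetrated in shots if penetrated)
    if len(stopped) < n_pairs or len(through) < n_pairs:
        raise ValueError("not enough mixed results for the estimate")
    used = stopped[-n_pairs:] + through[:n_pairs]
    return sum(used) / len(used)

# Hypothetical fragment shot series (velocities in ft/s).
shots = [(2010, False), (2045, False), (2060, False), (2080, False),
         (2095, True), (2100, True), (2120, True), (2130, True)]
print(f"Estimated V50: {v50_estimate(shots):.0f} ft/s")   # 2083 ft/s
```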

For FAT, as shown in Table 4-1, 48 helmet shells are tested against the Remington 9-mm threat, and 35 helmets are tested for hardware. Another 65 helmets may be tested against a small arms threat (which is classified). In addition, 27 helmets are tested for V50. Table 4-1 specifies both the size of the helmet (small, medium, large, and extra large) and whether the helmet is exposed to a particular environment, such as ambient, hot, cold, seawater,3 weatherometer (accelerated test to mimic long-term exposure to weather), and other types of accelerated aging. Under the DOT&E protocol, within each set of tests (shell, hardware, and small arms), the results are combined across the helmet sizes and environments to assess whether FAT is passed or failed. The details are described in Chapters 5 and 6.

 

_________________

3The helmets the Army procures are used DoD wide, including both the Navy and the Coast Guard. Soldiers wearing helmets may also find themselves in a maritime environment while on Navy support troop-carrying vessels. The purpose of testing helmets that have been conditioned by seawater is to determine if the helmet material can withstand exposure in that environment without degraded ballistic performance.



The current DOT&E testing methodology is based on a number of assumptions, including the following:

•   Shots are independent. In FAT and LAT, each helmet is shot five times in five separate locations. The resulting analyses treat these shots as independent, combining all of the shots across the helmets to assess RTP performance. This practice minimizes the number of helmets tested, and, to the extent that RTP failure is a rare, helmet-level event, testing fewer helmets decreases the chances that a defective helmet is selected for testing. To the extent that the shots are truly independent, combining them is appropriate. To the extent that they are not, the practice introduces a bias in favor of soldier safety, because helmets are stressed beyond what is likely to occur in the field (see the illustration following this list).

•   Helmet performance is equivalent across testing environments. In FAT, helmets are exposed to various environments that include temperature extremes and other potential helmet stressors. The goal in such testing is to ensure that the helmets perform up to specifications in a variety of environments. Because helmets exposed to these different environments respond differently in terms of RTP and BFD, combining the results across all the helmets is not strictly statistically correct. However, given the relatively small observed differences between environmental conditions, this does not appear to be a major contributor to variability.

•   Data from predefined test locations sufficiently characterizes overall helmet performance. As described in Appendix E, helmets are tested in five precise locations, and thus it is implicitly assumed that the results from these five locations adequately describe the performance of the helmet overall. From a process variation perspective, this approach potentially helps minimize testing variation. However, by definition, it also means that not all parts of the helmet are tested, some of which are known to be weaker. For example, the edges of the helmet are not tested, nor are the raised areas of the helmet around the ears. As such, the performance of the helmet in these regions is simply not observed during FAT and LAT.4
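
To illustrate why the independence assumption in the first bullet matters, the sketch below compares the probability of observing zero penetrations under two hypothetical models: one in which every shot is an independent trial at the marginal penetration rate, and one in which shots are correlated because a small fraction of helmets is weak. All of the probabilities are invented for illustration.

```python
def p_zero_independent(p_shot, n_helmets, shots_per_helmet=5):
    """P(no penetrations) if every shot is an independent Bernoulli trial."""
    return (1.0 - p_shot) ** (n_helmets * shots_per_helmet)

def p_zero_helmet_level(q_weak, p_weak, p_good, n_helmets, shots_per_helmet=5):
    """P(no penetrations) if shots are correlated through the helmet: a helmet
    is weak with probability q_weak, and all of its shots then share the higher
    per-shot penetration probability p_weak."""
    per_helmet = (q_weak * (1.0 - p_weak) ** shots_per_helmet
                  + (1.0 - q_weak) * (1.0 - p_good) ** shots_per_helmet)
    return per_helmet ** n_helmets

# Illustrative numbers only: 5 percent weak helmets, matched marginal shot rate.
q_weak, p_weak, p_good, n_helmets = 0.05, 0.20, 0.01, 24
p_marginal = q_weak * p_weak + (1 - q_weak) * p_good

print(f"Independent-shot model: {p_zero_independent(p_marginal, n_helmets):.3f}")
print(f"Helmet-level model:     {p_zero_helmet_level(q_weak, p_weak, p_good, n_helmets):.3f}")
```

Even with the same average per-shot penetration rate, the two models imply different probabilities of observing zero penetrations, which is why the helmet-level character of rare RTP failures matters when pooled shot counts are interpreted.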

Test Threat Projectiles

For FAT, the helmet shell and hardware are tested against a Remington 9-mm, 124-grain full-metal-jacket (FMJ) projectile (DOT&E, 2011), and, per the DOT&E protocol, the helmet may also be tested against an unspecified small arms threat.5 The helmet is also tested for V50 (the velocity at which the helmet is equally likely to stop or not stop an object) for threats such as the following:

•   2-grain right-circular-cylinder (RCC) fragment,

•   4-grain RCC fragment,

•   16-grain RCC fragment,

•   64-grain RCC fragment, and

•   17-grain fragment simulating projectile (FSP) (DOT&E, 2011).6

The ACH purchase description further specifies minimum V50 velocities for the above RCC and FSP test projectiles (U.S. Army, 2012, p. 13).

As discussed in Chapter 3, there are three general categories of head injury threats: ballistic/fragmentation threats from rapidly moving bullets or fragments; blunt threats from impact into vehicle interiors, the ground, large slow fragments, or other sources of head impact; and blast threats from bombs, artillery, improvised explosive devices, and other explosive sources. Blast and fragmentation threats from explosions historically have been the source of a large majority of U.S. military wounding, while direct gunshot wounds have decreased 46 percent relative to injuries with an explosive source between Vietnam and Operation Enduring Freedom and Operation Iraqi Freedom.

For the DOT&E LAT protocol, the shell and hardware are required only to be tested against the Remington 9-mm, 124-grain FMJ projectile (DOT&E, 2012). The ACH purchase description further requires V50 testing for the 17-grain FSP (U.S. Army, 2012).

4.3 SOURCES OF TEST VARIATION

Variation in test measurement is an unavoidable part of testing. In the ideal testing process, all observed variation in test measures is related directly and perfectly to the items being tested. In industrial quality control parlance, this is referred to as “part-to-part” variation. However, in the real world, the testing process itself also introduces variation into the test measurements. In terms of assessing the quality of an item, this is the “noise” in the testing process. The goal of a good testing process is to minimize these process-related sources of noise. The National Research Council Phase I report (NRC, 2009, p. 12) noted that the “measurement system variance required for a test should be a factor of 10 or better than the total measured variation,” in order to have confidence that differences in the observed measurements predominantly represent part-to-part (i.e., helmet-to-helmet) differences.
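
In variance terms, the guideline amounts to checking what fraction of the total observed variance is contributed by the measurement system. The figures below are purely illustrative BFD variances, not estimates from test data.

```python
def measurement_share(var_measurement: float, var_total: float) -> float:
    """Fraction of total observed variance contributed by the measurement system."""
    return var_measurement / var_total

var_total = 16.0        # hypothetical total variance of observed BFD (mm^2)
var_measurement = 1.2   # hypothetical variance added by the measurement system (mm^2)

share = measurement_share(var_measurement, var_total)
verdict = "meets" if share <= 0.10 else "exceeds"
print(f"Measurement system share of total variance: {share:.1%} ({verdict} the one-tenth guideline)")
```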

 

_________________

4See Chapter 9 for a discussion of assessing helmet performance at other locations during characterization testing.

5Kyle Markwardt, Test Officer, Aberdeen Test Center, “Helmet IOP PED-003 Briefing to NRC Helmet Protocols Committee,” presentation to the committee on March 22, 2013.

6Ibid.


Helmet-to-helmet variability includes both variation within and between helmet manufacturers. There are a number of additional sources of variation in the current test process, including the following:

•   Gauge-to-gauge (measurement) variability, which arises when there are accuracy or precision differences within or between the gauges used to measure helmet performance. For helmet testing, the issue of gauge-to-gauge variation is largely associated with the laser used to measure BFD, although it may also arise in other test-range measures such as those related to measuring projectile velocity, yaw, and obliquity.

•   Operator-to-operator variability, which arises when the individuals conducting the test either execute the test differently or interpret test or measurement outcomes differently (or both). For helmet testing, because V0 RTP testing is assessed visually, the operator is the “gauge,” and thus the two types of variation are synonymous in this particular case.

•   Lab-to-lab variability arises when different laboratories conduct helmet ballistic testing. Currently, only the U.S. Army Aberdeen Test Center (ATC) conducts helmet testing, so this type of variation is not applicable at this time, but it could be in the future.

•   Environmental conditions variability arises to the extent that the testing is dependent on environmental conditions such as ambient test range temperature and humidity. Although the current ATC test is conducted in a temperature- and humidity-controlled test range, the temperature and humidity can still vary within specified constraints around nominal values.

•   Projectile velocity and impact variability arise from variation in individual shots. Much of this variability is controlled via the criteria that fair shots must be within certain constraints on velocity, obliquity, yaw, and location, but, as with the environmental conditions, some residual variation remains within the range of the specified constraints (a hypothetical fair-shot check is sketched after this list).

•   Test item configuration variability could arise in V0 helmet testing if helmet pads and other hardware differ, for example, if the helmet pads are installed in different configurations or if the construction or make-up of the pads themselves differs.

•   Helmet-to-headform stand-off variability arises when one headform size is used to test multiple sizes of helmets. This can result in differential stand-off distances by helmet size, which can affect BFD.

•   Clay variability arises because the clay formulation has changed over time; as a result, the clay now has to be heated in order to achieve its historical rheological properties. Because the heated clay cools during the test process, its properties change over time, and this can affect BFD.

•   Impact location variability arises to the extent that different locations on the helmet respond to the ballistic impacts differently and/or if the order in which the locations are shot affects the test outcome.

•   Environmental testing variability arises when the various environmental conditions to which some of the helmets are exposed (high and low temperature, seawater, etc.) differentially affect the RTP and BFD performance of the helmets, and yet the helmets are combined together for analysis.
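
As a concrete illustration of the fair-shot screening mentioned in the projectile velocity bullet above, a shot record could be checked against tolerance windows like those below. All of the limits are hypothetical placeholders, not the protocol’s actual fair-shot criteria.

```python
def is_fair_shot(velocity_mps: float, obliquity_deg: float, yaw_deg: float,
                 nominal_velocity_mps: float = 430.0, velocity_tol_mps: float = 10.0,
                 max_obliquity_deg: float = 5.0, max_yaw_deg: float = 3.0) -> bool:
    """Return True if a shot falls inside hypothetical fair-shot tolerances
    on velocity, obliquity, and yaw (all limits are illustrative only)."""
    return (abs(velocity_mps - nominal_velocity_mps) <= velocity_tol_mps
            and obliquity_deg <= max_obliquity_deg
            and yaw_deg <= max_yaw_deg)

print(is_fair_shot(434.0, 2.1, 1.0))   # True: inside every illustrative window
print(is_fair_shot(445.0, 2.1, 1.0))   # False: velocity outside the illustrative window
```

Even for shots that pass such a screen, the residual variation inside the tolerance windows remains a source of test-to-test noise.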

The current testing process seeks to control many of these sources of variation through standardized testing procedures, accurate measurement instrumentation, and the like. To the extent physically, analytically, and economically possible, controlling these sources of variation makes it easier to distinguish signal (i.e., differences in helmet performance) from noise (i.e., variation in the testing process).

Of course, testing costs time and money, and there are diminishing returns (and often increasing costs) in the pursuit of increasingly precise test measurements. Furthermore, the required level of measurement precision should be linked to and driven by the overall variation in the testing process where, for example, excessively precise measurements add little value to a testing process that is itself inherently highly variable. Conversely, in any testing process, there should be a precision threshold that any measurement device must meet—again based on the overall variation of the testing process—to ensure that the measurement process itself does not add excessive variability to the test (NRC, 2012). As noted earlier, the previous NRC body armor reports recommend that variance attributable to the test measurement process should be less than one-tenth of the total measured variation (see NRC, 2009, p. 12; NRC, 2012, Appendix G; McNeese and Klein, 1991).

Finding 4-1. Some sources of test variation are relevant to the current helmet testing process while others are not. For example, given that tests are currently conducted only at ATC, lab-to-lab variability is not currently applicable. Similarly, some sources of variation are directly observable with existing data, and some are not. For example, as discussed in Chapter 5, the test data show clear helmet size effects, impact location effects, and minor environmental effects.

Finding 4-2. In the absence of more formal gauge repeatability and reproducibility (R&R) studies, as well as other experimental studies, it is generally not possible to determine how much of the variation attributed to helmets actually arises from the other sources of variation listed above, such as the clay, operators, and the laser.


The NRC Phase III report on body armor noted the need for a formal gauge R&R study to determine the sources and magnitudes of variation in the test process (NRC, 2012, p. 10). To the best of the committee’s knowledge, such a study has not been done.

Recommendation 4-1. The Department of Defense should conduct a formal gauge repeatability and reproducibility study to determine the magnitudes of the sources of test variation, particularly the relative contributions of the various sources from the testing methodology versus the variation inherent in the helmets. The Army and the Office of the Director, Operational Test and Evaluation, should use the results of the gauge repeatability and reproducibility study to make informed decisions about whether and how to improve the testing process.
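
To indicate what such a study would estimate, the sketch below applies standard ANOVA-style variance-component formulas to a balanced crossed design (helmets by operators, with repeated measurements), separating a gauge R&R component from helmet-to-helmet variation. The design, the factor names, and the simulated BFD-like readings are all hypothetical; an actual gauge R&R study would need to be designed around the real test process.

```python
import numpy as np

def gauge_rr(data):
    """ANOVA-method variance components for a balanced crossed design.

    data: array of shape (n_parts, n_operators, n_repeats); here, imagine
    repeated readings of the same set of helmet-test indents taken by
    different operators.  Negative component estimates are truncated to zero."""
    p, o, r = data.shape
    grand = data.mean()
    part_means = data.mean(axis=(1, 2))
    oper_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    ms_part = o * r * ((part_means - grand) ** 2).sum() / (p - 1)
    ms_oper = p * r * ((oper_means - grand) ** 2).sum() / (o - 1)
    ms_int = (r * ((cell_means - part_means[:, None] - oper_means[None, :] + grand) ** 2).sum()
              / ((p - 1) * (o - 1)))
    ms_err = ((data - cell_means[:, :, None]) ** 2).sum() / (p * o * (r - 1))

    repeatability = ms_err                                    # equipment/clay/laser noise
    interaction = max((ms_int - ms_err) / r, 0.0)
    reproducibility = max((ms_oper - ms_int) / (p * r), 0.0)  # operator-to-operator
    part_to_part = max((ms_part - ms_int) / (o * r), 0.0)     # helmet-to-helmet

    grr = repeatability + reproducibility + interaction
    return {"gauge R&R": grr, "helmet-to-helmet": part_to_part,
            "gauge share of total": grr / (grr + part_to_part)}

# Simulated example: 10 helmets x 3 operators x 2 repeats of a BFD-like reading (mm).
rng = np.random.default_rng(0)
readings = (25.0
            + rng.normal(0.0, 3.0, size=(10, 1, 1))    # helmet-to-helmet spread
            + rng.normal(0.0, 0.5, size=(1, 3, 1))     # operator differences
            + rng.normal(0.0, 0.8, size=(10, 3, 2)))   # repeatability noise
print(gauge_rr(readings))
```

If such a study showed, for example, that the gauge share exceeded the one-tenth guideline discussed above, that would argue for improving the measurement process before drawing fine distinctions among helmets.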

4.4 ADDITIONAL MEASUREMENT AND TESTING ISSUES

Without delving into the specific details of the DOT&E FAT and LAT protocols here (see Chapters 5-7), there are two additional BFD measurement and testing issues of note: the use of clay as a BFD recording medium and the effect of the headform on the measurement of BFD.

Clay as a Recording Medium

As described in the Phase III report (NRC, 2012), little is known about the use of clay as an impact recording medium, including how accurately it records the backface signature of an impact and how much variation it adds to the testing process. Thus it is unclear whether the use of clay is appropriate for helmet testing, particularly because “the mechanical backface response of the head surrogate may govern both penetration and impact tolerance portions of the test” (NRC, 2012, p. 152).

One of the critical issues with the current clay (Roma Plastilina #1), as first noted in the NRC Phase II report (NRC, 2010), is that the clay is time and temperature sensitive: as Figure 4-1 shows, its properties can change significantly over a 45-minute period as it cools. These effects are likely to affect BFD measurements.

The previous body armor committees studied many of the issues related to clay (NRC, 2012, 2010), and a detailed examination of these issues is beyond the scope of this committee’s charge. But the committee notes that, purely from a testing process perspective, it is important to minimize this source of variation in the testing process. In particular, the Phase III body armor report recommended that DOT&E and the Army expedite the development of a replacement for the current Roma Plastilina #1 clay that can be used at room temperature (NRC, 2012). The committee notes that successful completion of this effort has the potential to remove a significant source of testing variation and thus greatly improve the testing process.


FIGURE 4-1 Clay time and temperature effects in the column drop test. Each line represents the results of repeated column drop tests on a standard clay box, each of which was subjected to different environmental conditioning. Measurements were taken at 3, 18, 33, and 48 minutes, and the lines on the graph are linear interpolations between the observed results at those time points. The graph shows that the depth of penetration systematically decreases over time as the clay cools. (See Appendix E for a description of the column drop test.) SOURCE: NRC (2010).

Headforms

Army helmet testing is currently based on the ATC headform—derived from the National Institute of Justice headform discussed in Chapter 3—with slots in the coronal and midsagittal directions (Figure 4-2). As more fully described in Appendix E, the slots in the headform are packed with clay as the recording medium for both penetration and BFD. There is currently one headform size, although there may be up to six helmet sizes (depending on the type of helmet).

FIGURE 4-2 Aberdeen Test Center headform. SOURCE: NRC (2012).

Two major issues with the headform may compromise its ability to appropriately and consistently measure BFD. First, the petals may impede the BFD of the helmet, which could result in under-measurement of the actual ballistic transient deformation of the helmet. Second, as previously discussed, with only one headform size, the stand-off distances may vary by helmet. Large helmets likely have a larger stand-off distance, whereas small helmets likely have to be forced onto the headform with minimal stand-off.

The Army is developing five new “sized” headforms that will have a constant helmet shell-to-headform standoff distance for the Advanced Combat Helmet.7 As illustrated in Figure 4-3, the motivation for the new sized headforms is to eliminate one source of variation in helmet testing that arises because different sizes of helmets interact with the current single-size headform in different ways.

FIGURE 4-3 New Army “sized” headforms. SOURCE: James Zheng, Chief Scientist, Soldier Protective and Individual Equipment, PEO Soldier, “Helmet Testing, Related Research & Development,” presentation to the committee on March 22, 2013.

 

_________________

7James Zheng, Chief Scientist, Soldier Protective and Individual Equipment, PEO Soldier, “Helmet Testing, Related Research & Development,” presentation to the committee on March 22, 2013.



Finding 4-3. The implementation of new “sized” headforms by the Army represents an improvement in the helmet testing process because the stand-off between helmet and headform will be the same for all helmet sizes.

The committee notes that these headforms were “reverse engineered” from the existing helmets so that the stand-off distances would all be exactly the same. It is not clear how anthropomorphically correct the new headforms are or how closely they reflect the actual needs of soldiers and marines.

Recommendation 4-2. For future helmet development and testing efforts, the Department of Defense should assess the importance of using anthropomorphically correct headforms (as well as any other ballistic test dummies) based on head sizes and proportions that appropriately characterize the population that will wear the helmet.

The “Peepsite”8 headform (Figure 4-4) was developed by the U.S. Army Research Laboratory to avoid the drawbacks of the ATC headform, in particular, the fact that the clay used to measure BFD is located between four solid aluminum parts of the headform.

FIGURE 4-4 Peepsite headforms: five headforms, one for each shot direction. SOURCE: Robert Kinsler, Survivability/Lethality Analysis Directorate, Army Research Laboratory, “The Peepsite Headform,” presentation to the committee on January 24, 2013.

As described in NRC (2012), the ATC headform has three potential problems. The first is that the solid aluminum petals constrain the flow of the clay during impact, which may result in a smaller BFD than would otherwise have occurred. The Peepsite headform reduces this possibility by eliminating the metallic petals near the impact location.

The second potential problem is that helmet backface contact can span the aluminum petals, either preventing further impact or altering the BFD response and backface signature recorded in the clay. As with the first problem, the lack of petals in the Peepsite headform eliminates the potential for this type of helmet-headform interaction, which may alter helmet backface response.

The third potential problem arises because the clay and helmet have very different temperature characteristics. Using the current Roma Plastilina #1 clay, the clay is heated above room temperature to achieve the desired rheological behavior. Testing on the Peepsite headform, however, is done at room temperature, which means that the rate of cooling of the clay and the aluminum headform will be different, resulting in thermal gradients and residual strains and stresses in the clay that may affect the impact event (NRC, 2012).

NRC (2012) noted that the Peepsite headform reduces the potential for a number of problems with the existing ATC headform. It further recommended that the Army investigate using the Peepsite headform with the new room-temperature clay. That report indicated that the Peepsite headform has the potential to improve testing compared to the ATC headform used with clay at elevated temperatures.

4.5 REFERENCES

CFR (Code of Federal Regulations). 2013. 48 CFR part 246–Quality Assurance. http://cfr.regstoday.com/48cfr246.aspx. Accessed April 1, 2013.

DoD (Department of Defense). 1987. Department of Defense Test Method Standard: V50 Ballistic Test for Armor. MIL-STD-662F. U.S. Army Research Laboratory, Aberdeen Proving Ground, Md.

DOT&E (Director of Operational Test and Evaluation). 2011. Standardization of Combat Helmet Testing. Memorandum from J. Michael Gilmore, Director. September 20, 2011. Office of the Secretary of Defense, Washington, D.C. [reprinted in Appendix B]

DOT&E. 2012. Standard for Lot Acceptance Ballistic Testing of Military Combat Helmets. Memorandum from J. Michael Gilmore, Director. May 4, 2012. Office of the Secretary of Defense, Washington, D.C. [reprinted in Appendix B]

FAR (Federal Acquisition Regulations). 2013. Federal Acquisition Regulations, Subpart 9.3, Paragraph 9.302. First Article Testing and Approval. http://www.acquisition.gov/far/current/html/Subpart%209_3.html. Accessed March 30, 2013.

McNeese, W., and R. Klein. 1991. Measurement systems, sampling, and process capability. Quality Engineering 4(1):21-39.

NRC (National Research Council). 2009. Phase I Report on Review of the Testing of Body Armor Materials for Use by the U.S. Army: Letter Report. The National Academies Press, Washington, D.C.

NRC. 2010. Testing of Body Armor Materials for Use by the U.S. Army—Phase II: Letter Report. The National Academies Press, Washington, D.C.

NRC. 2012. Testing of Body Armor Materials: Phase III. The National Academies Press, Washington, D.C.

Prather, R., C. Swann, and C. Hawkins. 1977. Backface Signatures of Soft Body Armors and the Associated Trauma Effects. ARCSL-TR-77055. U.S. Army Armament Research and Development Command Technology Center, Aberdeen Proving Ground, Md.

U.S. Army. 2012. Advanced Combat Helmet (ACH) Purchase Description, Rev A with Change 4. AR/PD 10-02. Soldier Equipment, Program Executive Office—Soldier, Fort Belvoir, Va.

 

_________________

8The “Peepsite” headform was developed at the Army Research Experimental Facility Peep Site Range 20 at Aberdeen Proving Ground, Md.



 

