The committee was tasked to provide findings on the best instrumentation and procedures for measuring backface deformation (BFD) in clay. Accordingly, this chapter discusses relevant criteria for test instrumentation and procedures, including fixed and variable costs, precision and accuracy, and human operator considerations.
It is informative to review how the instrumentation and measurement of BFD relate to the overall methodology: the original animal tests, the selection of clay, the selection of performance specifications, and the instrumentation measurements themselves. These conceptual steps suggest directions for improving testing procedures. In most experimental studies and scientific measurements, several conceptual and scientific stages must be considered and followed: (1) deciding which phenomena or parameters must be measured; (2) ensuring the validity and completeness of the experimental design or measurement protocol, so that the data set is complete and accurate and outside variables are eliminated; and (3) treating the statistics associated with the measurements, including instrumentation accuracy versus required accuracy. The six conceptual steps that follow trace the development of BFD measurement specifications.
Animal experiments with goats indicated that the maximum acceptable indent lay between 40 and 50 mm. Specifically, from Chapter 3 of this report, 44-mm deformation in the goats correlated with an ~10 percent probability of death from the impactor, consistent with the initial program requirement of less than or equal to 10 percent lethality. This deformation level was below the lowest value at which any goat died in the impactor tests, which were conducted at much lower velocities. This depth was therefore selected as the injury reference value in the clay.
This selection implies that a 44-mm impactor deformation in the goat is similar to a 56-mm deformation in clay or gelatin. Conversely, a 44-mm deformation in the clay is similar to a 34-mm deformation in the goat for this hard impactor, as shown in Figure 3-5. (In Figure 3-5, the fatality risk at 34 mm of deformation in goats, or 44 mm of deformation in clay, is approximately 4 percent.) It was reasonable to assume that using the blunt impactor tests to model injury from behind-armor trauma was “likely” conservative for this impact velocity range. However, outside the velocity range typical of handgun rounds (i.e., ~240 m/sec) into soft body armor, the relationship between injury and deformation response in the clay is much less certain. More recent measurements in sheep at velocities of about 800 m/sec showed significant lethality even with 34-mm-deep indents (cf. Chapter 8 and Gryth et al., 2007).
Clay BFDs were tailored to mimic such an indent. From Chapter 4 in this report, and as introduced in Chapters 2 and 3, the Roma Plastilina #1 (RP #1) modeling clay backing material used in armor testing has two important purposes. The first is to simulate the tissue response beneath the point of impact so that ballistic data generated in laboratory tests can be correlated to effects seen on the human body. The second purpose is to denote the extent of BFD during ballistic testing (Prather et al., 1977).
Multiple materials are available to simulate a body; in fact, at the time it was introduced, modeling clay was recognized to only approximate tissue response, and empirical correlations were needed to develop a probability for lethality or injury. The chief advantage of modeling clay over other materials available at the time was that it better served the function of recording BFDs; that is, when impacted, modeling clay deforms plastically, and a permanent cavity (also termed “indent,” “impression,” or “crater”) is developed under the point of impact. Correlations were developed between the geometry of the cavity and the probability of lethal injury.
The U.S. Army Aberdeen Test Center (ATC) has set the maximum acceptable BFD value at 44 mm for body armor plates tested using clay. This value appeared reasonable based upon the past measurements. As noted in Chapter 3, the Army does not have the medical outcomes to know whether 44 mm is a conservative value.
Measurement instruments were used to verify the test results, as directed by the procurement specifications. Digital calipers and, later, laser-based instruments were used to better measure the BFD under nonideal conditions (i.e., offset and side/edge indents) (Walton et al., 2008). However, different instruments may give different BFD readings, because each has its own measurement precision and a measurement accuracy that depends on the measurement scenario.
Two steps have taken place since the National Research Council (NRC) was tasked to study BFD measurement techniques:
Step Five: The Office of the Director, Operational Test and Evaluation (DOT&E) developed a statistically based protocol and test processes, including measurement techniques, for first article and lot acceptance testing of hard body armor (DOT&E, 2010). (See Chapter 6.)

Step Six: The NRC committee examined data related to the precision and accuracy of different measurement instruments under different measurement scenarios, to gain insight into which instruments, or ways of using them, might meet or exceed the precision and accuracy required to measure BFD. The instruments and measurement scenarios are covered in this chapter, and statistical considerations are presented in Appendixes G and M.
It is informative to keep in mind that the above conceptual steps were made to address four overall questions:
(1) How well do the testing procedure and measurements of the BFD quantify the probability of lethality?
(2) Is the measurement set complete and scientifically valid, eliminating outside variables (i.e., is the design of the experiment or measurement procedure sound)?
(3) How well must the BFD be measured?
(4) What are the statistical accuracy and precision of the measurements?
When discussing body armor testing and, particularly, the equipment required in the conduct of adequate tests, the question arises: How well must the BFD be measured? Put another way, what are the limits of acceptable error in BFD measurement? To answer this question, it is first necessary to define a number of terms.
The Phase I report (NRC, 2009) discussed the difference between the accuracy and the precision of a measuring device. Although the two terms are often used interchangeably and considered synonymous in colloquial use, they have quite different technical meanings. Accuracy is the closeness of a measured quantity to its actual (true) value. Precision is the closeness of agreement between
measured values obtained by replicate measurements under specified conditions (NRC, 2009).
A measurement device is valid if it is both accurate and precise. However, a device can be precise but not accurate, accurate but not precise, or neither accurate nor precise. If a measurement device has a systematic error, then increasing the number of samples (the sample size) increases precision but does not improve accuracy. On the other hand, eliminating the systematic error improves accuracy but does not change precision.
Precision is often quantified in terms of either the standard deviation or the expanded uncertainty (twice the standard deviation of repeated values) of the measurement device. Accuracy, or bias, is estimated as the difference between the average of a large number of repeated values under a specific set of conditions and the true value. Beginning with bias, the ideal measurement device for BFD should have no bias. That said, a biased measuring device may be acceptable in armor testing if it is consistently biased across all possible plate and test configurations; under these conditions, the BFD test standard can simply be increased or decreased to account for the bias. However, demonstrating that a device is consistently biased is likely to be difficult at best, and consistent bias is unlikely to hold in practice.
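The distinction between bias and precision can be sketched numerically. Below is a minimal example using hypothetical repeated readings of a reference cavity of known depth; all numbers are invented for illustration and do not come from any test data discussed in this chapter.

```python
import statistics

def characterize_device(readings_mm, true_value_mm):
    """Summarize a measuring device from repeated readings of a known
    reference depth: bias (accuracy), precision (standard deviation),
    and expanded uncertainty (twice the standard deviation)."""
    bias = statistics.mean(readings_mm) - true_value_mm   # systematic error
    precision_sd = statistics.stdev(readings_mm)          # repeatability
    return bias, precision_sd, 2 * precision_sd

# Hypothetical repeated readings (mm) of a 44.00-mm reference cavity
readings = [44.4, 44.5, 44.3, 44.6, 44.4, 44.5]
bias, sd, u95 = characterize_device(readings, true_value_mm=44.0)
# A consistent positive bias could, in principle, be absorbed by
# adjusting the 44-mm test standard; the spread (sd) cannot.
```

As the text notes, increasing the number of readings tightens the estimate of the mean but does nothing to remove the systematic component captured by `bias`.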
Appendix G demonstrates that there are diminishing returns (and probably increasing costs) in the pursuit of ever more precise measuring devices. This result follows from the fact that the necessary level of measurement precision is a function of the overall variation in the testing process, where, for example, highly precise measurements add little value to a testing process that is itself inherently highly variable. Conversely, in any testing process, there should be a precision threshold that any measurement device must meet—again based on the overall variation of the testing process—to ensure that the measurement process itself does not add to the variability arising from the test.
For the current clay-based test methodology, the Appendix G analysis suggests that a precision threshold of 0.5 mm (i.e., one standard deviation) is necessary to ensure that the measurement device does not add appreciable variation to the body armor testing process. This value is consistent with the intuition of subject matter experts, who told the committee that measurement precision on the order of 0.5 mm is sufficient for the current testing process, and with injury “effect size” calculations done by the committee. It is somewhat larger than the heuristic suggested in the Phase I report (NRC, 2009) that the measurement system variance should be better by a factor of 10 or more than the total measured variation (McNeese and Klein, 1991).
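The diminishing-returns argument can be illustrated with the usual quadrature rule for independent error sources: observed spread is the square root of the sum of the process and measurement variances. The 3.0-mm process standard deviation below is an assumed placeholder for illustration, not a value taken from Appendix G.

```python
import math

def observed_sd(process_sd_mm, measurement_sd_mm):
    # Independent variance sources add; the observed spread is the
    # quadrature sum of process and measurement standard deviations
    return math.sqrt(process_sd_mm**2 + measurement_sd_mm**2)

process_sd = 3.0  # mm, assumed overall test-process variation
inflation_05 = observed_sd(process_sd, 0.5) / process_sd - 1  # ~1.4 percent
inflation_01 = observed_sd(process_sd, 0.1) / process_sd - 1  # ~0.06 percent
# Tightening the device from 0.5 mm to 0.1 mm buys almost nothing once
# the process itself varies by millimeters.
```

Under this assumption, a 0.5-mm device inflates the observed spread by only about 1.4 percent, which is why ever more precise instruments add little practical value.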
The original procurement specifications for body armor plates state that the BFD shall be measured with an instrument that has an accuracy of ±0.1 mm. However, the detailed analysis presented in Appendix G indicates that this value (and the accompanying wording) was probably too stringent, and somewhat ambiguous in its use of the term “accuracy”; an instrumentation precision of about 0.5 mm would be sufficient to test correctly and to detect statistically significant effects.
Finding: Given the current clay variation, a measurement precision (standard deviation) of 0.5 mm is sufficient; instruments featuring greater precision add little practical value to the testing process. Future improvements in the inherent variability of the backing material will require instruments that are correspondingly more precise.
This section covers instrumentation that has been and is being used for BFD measurements. In particular, three instruments have been used: the Coordinate Measuring Machine (CMM), used as a reference instrument; the digital caliper; and a Faro laser scanning probe system. A CMM costs about $500,000; the Faro system, about $150,000; and the digital caliper, about $200. The three systems were used in extensive measurements and tests reported by Walton et al. (2008).
The CMM is a Wenzel X-orbit bridge type with both digital point probes and an LDI laser scanner (Model SLP250); the system has a precision of ±0.35 µm (0.00035 mm) and an accuracy of 10 µm (0.01 mm). ATC considered it sufficiently precise and accurate that its measurement results can serve as a “true” value (Walton et al., 2008). The two systems that have been used to measure BFD indents made in clay during body armor testing are the digital caliper and the Faro laser.
The digital caliper (referred to as the “caliper”) was used for many years as a low-cost, point-to-point instrument to measure BFD depths. It consists of a manually operated depth probe integrated with an electronic digital display, paired with a bridge gauge that provides a stable base for measurement. The operator affixes a bridge gauge to the side of the box that holds the body armor and the underlying clay recording medium. A baseline preshot measurement is made with the caliper at the point of aim where the test bullet is expected to strike the armor. The bridge gauge and caliper are removed, and the test firing is conducted. The bridge gauge is then replaced on the box, and a postshot measurement of the BFD is made with the caliper. The operator visually locates the deepest point in the BFD crater and takes one reading with the caliper at that point; an operator with experience and judgment is thus required for an accurate and consistent measurement. The two types of caliper instruments used are shown in Figure 5-1.
FIGURE 5-1 Digital calipers used in armor testing. The ATC standard caliper with the small end (3 mm) is shown at the top. The caliper used by commercial testers (H.P. White Laboratory and Chesapeake Testing) with a large 19-mm tip is shown at bottom. The dimension of the wide tip (19 mm) was measured by Chesapeake Testing at the request of the committee. (The center caliper is not used.) SOURCE: Courtesy H.P. White Laboratories.
The Office of the Director, Operational Test and Evaluation (DOT&E) has designated the Faro Quantum Laser Scan Arm with Geomagic Qualify software (referred to as “the Faro”) as the device for measuring BFDs in hard and soft body armor (DOT&E, 2010). Laser profilometry, as implemented in the Faro scanning laser instrument, employs the commonly used principle of optical triangulation. A laser generates a collimated beam, which is then focused and projected onto a target surface. A lens reimages the laser spot formed on the surface of the target onto a charge-coupled device, which generates a signal indicating the spot’s position on the detector. As the height of the target surface changes, the image of
the laser spot shifts owing to parallax. To generate a three-dimensional image of the specimen’s surface, the sensor scans in two dimensions, generating a set of noncontact measurements that represent the surface topography of the specimen under inspection. The data are then used to compute the three-dimensional geometrical profile of the surface, with readings essentially continuous over the scanned region. Thus, the laser scanner produces a series of measurements over the whole surface of the clay, as opposed to the single reading obtained with the digital caliper. In addition, a laser scanning system has the ability to acquire substantial quantities of inspection data. Figure 5-2 shows the Faro Quantum Laser Scan Arm.
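In the idealized case of a camera viewing normal to the surface with the laser inclined at a fixed angle, the triangulation geometry reduces to a simple relation: a height change of dz moves the spot laterally by dz·tan(θ) on the target, which appears magnified on the detector. The function and numbers below are an illustrative sketch of that geometry, not the Faro's actual calibration model.

```python
import math

def height_change_mm(spot_shift_mm, magnification, laser_angle_deg):
    """Idealized optical triangulation: recover the surface-height
    change from the lateral shift of the imaged laser spot (camera
    normal to the surface, laser oblique at laser_angle_deg)."""
    return spot_shift_mm / (magnification * math.tan(math.radians(laser_angle_deg)))

# e.g., a 0.5-mm image shift at 2x magnification with a 45-degree laser
dz = height_change_mm(0.5, 2.0, 45.0)   # 0.25-mm height change
```

Scanning this single-point measurement in two dimensions is what yields the essentially continuous surface map described above.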
The precision and accuracy of instruments depend to a great degree on the associated operating procedures and on the skill of human operators. This section describes the “art of measurement” in testing procedures as observed by the committee.
A number of practical human operator considerations have an impact on the measurement differences and variations associated with all measuring systems. These include subjective differences in human handling, process transparency, and the selection and settings of software.
Operator-to-Operator (Human Handling) Variability
The operator-to-operator (human handling) variability differs markedly between the two measuring devices. Walton et al. (2008) report an operator variability of 0.041 mm for the Faro, while that for the digital caliper is an order of magnitude greater, at 0.471 mm.
Members of the committee interviewed operators at ATC and at two commercial testing companies.26 The caliper end makes contact with the clay: the operator must judge the deepest point in what may be a complex BFD and then carefully, manually lower the caliper arm until it just touches the surface of the clay without denting it. Operators state that making precise measurements is an art. Variation among operators can be 0.1 to 0.3 mm when measuring the same BFD in the center of an armor plate; this variation was observed directly at ATC when three different operators measured the same BFD using a digital caliper.
The Faro is a noncontact device. The operator must use judgment about where and how fast to “paint” the armor in a prefire scan (to digitally capture the surface of the armor) and, similarly, how fast to paint the BFD area in a postfire scan. The computer program compares the pre- and postfire scans and calculates the maximum BFD. According to experienced operators, these judgmental factors can produce measurement differences among Faro users of 0.1 to 0.2 mm for the same BFD. Similar differences were observed during a demonstration to the committee when three different operators measured the same BFD with the Faro.27
26Site visits to the ATC, H.P. White Laboratory, and Chesapeake Testing by members of the committee on August 30 and 31, 2010.
27Variations were reported by Faro operators at the ATC and commercial testing sites. Variations were then observed by the committee during demonstration at the ATC, August 31, 2010.
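A minimal gauge-style summary of such repeated readings separates within-operator repeatability from operator-to-operator spread. The sketch below uses invented numbers and is not the formal ANOVA gauge repeatability-and-reproducibility study that, as noted later in this chapter, has not been performed.

```python
import math
import statistics

def operator_variability(readings_by_operator):
    """Pooled within-operator SD (repeatability) and SD of operator
    means (a reproducibility proxy) for repeated reads of one BFD."""
    groups = list(readings_by_operator.values())
    pooled_within = math.sqrt(
        sum(statistics.stdev(g) ** 2 for g in groups) / len(groups))
    between = statistics.stdev([statistics.mean(g) for g in groups])
    return pooled_within, between

# Three hypothetical operators, two reads each of the same crater (mm)
reads = {"A": [44.1, 44.2], "B": [44.3, 44.4], "C": [44.0, 44.1]}
within_sd, between_sd = operator_variability(reads)
```

A formal study would also cross operators with multiple reference craters; this sketch only shows how the two variance components are distinguished.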
Testing protocols should anticipate that anomalous data can occur for any number of reasons and should include procedures to ensure data quality. These protocol procedures can provide a means for operators to quickly confirm that a measurement outside a predetermined upper and lower bound is not due to a major equipment or software malfunction. The committee notes that great caution is warranted if this idea is implemented because it has the potential to lead test operators to focus on measurement differences that are the result of noise and not actual differences.
There is software-related variability associated with the Faro, arising from a setting that controls smoothing of the raw digital data the Faro captures. For example, the committee was shown that changing from one level of smoothing to another produced a 1-mm difference in the BFD measurement for the same cavity.28 Two software settings are in use, one for 0.7-mm and one for 1.5-mm spatial resolution. An ATC manager stated that ATC testers tend to use the most conservative setting (i.e., the higher spatial resolution of 0.7 mm), which yields the largest BFD measurement, to ultimately protect soldiers.29 Manufacturers, on the other hand, feel that their armor may be unfairly penalized by judgment decisions that depend on the smoothing setting the operator chooses.30
In statistical and testing terms, the choice of the smoothing setting directly affects the accuracy of the Faro. That is, the choice of smoothing settings can introduce a systematic bias into the measurements, a bias that can make the test either harder or easier to pass depending on whether the bias results in systematically larger or smaller BFDs. As discussed in Appendix G (Key Point 4), an overly conservative setting on the Faro laser resulting in high spatial resolution may result in a design penalty that is roughly five times larger than the design space improvement achieved via better measurement precision. Thus, unless care is taken to understand the effect of the software smoothing algorithms on the indent measurement, any gains in precision achieved by using the Faro could be more than offset by a systematic bias. This result might not only make the test harder for manufacturers to pass but also might result in heavier armor if manufacturers are driven to make plates heavier to compensate for a measurement bias.
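The direction of the smoothing bias can be seen with even a crude box filter applied to a hypothetical one-dimensional crater profile. This is only an illustration of the mechanism; the Faro's actual smoothing algorithm belongs to its software and is not reproduced here.

```python
def box_smooth(profile, window=3):
    """Simple moving-average smoothing of a sampled depth profile (mm)."""
    half = window // 2
    out = []
    for i in range(len(profile)):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        out.append(sum(profile[lo:hi]) / (hi - lo))
    return out

# Hypothetical crater cross-section depths (mm)
profile = [0, 5, 12, 25, 41, 44, 40, 26, 13, 6, 0]
raw_max = max(profile)                    # 44 mm
smoothed_max = max(box_smooth(profile))   # lower than 44 mm
# Smoothing flattens the sharp peak, so the reported maximum BFD
# shrinks -- a systematic, setting-dependent bias, not random noise.
```

A wider smoothing window flattens the peak further, which is why the choice of setting can shift the pass/fail outcome in either direction.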
The committee considers the National Institute of Standards and Technology (NIST) to be an excellent third-party source of expertise on measurement instruments and standards; NIST has provided significant support to both DOT&E and the Army Program Executive Officer Soldier on both body armor testing and body armor design issues in the past.
28As observed by committee members during their site visit to the ATC, August 31, 2010.
29Discussion with Irene Johnson, ATC, during site visit, August 31, 2010.
30David Reed, President, North American Operations, Ceradyne, Inc., “Pragmatics of Body Armor Testing—Manufacturer’s View,” presentation to the committee, August 9, 2010.
Recommendation 5-1: An organization such as the National Institute of Standards and Technology should conduct a controlled study to determine the most reasonable and consistent Faro smoothing settings to be used while measuring backface deformations (BFDs) in body armor testing. Similarly, any other software selections that could cause relevant changes to BFD measurements should be studied. Corresponding values for the precision and accuracy of each software setting will need to be quantified.
Sometimes the deepest penetration in the clay and the initial bullet aim point are offset by a small distance. This affects both the accuracy and the precision of the instrumentation measurements. Operators of the caliper calibrate the instrument on the aim point but move the caliper to measure the deepest point of impact when the two do not align, which happens frequently. The caliper measurement procedure disregards the curvature of the plate and tends to overestimate the depth of the BFD. This correction and offset value can be large, a consequence of having only one preshot reading for the plate surface. As a mathematical correction for the offset, an operator referred the committee to an equation contained on an ATC test instruction sheet.31 Government and commercial operators alike felt that the equation was imprecise and would likely lead to an underestimation of the BFD. The equation also makes no provision for the offset becoming positive when the deepest point lies above the aim point on an edge shot.
In comparison to the caliper, the Faro takes into account the curvature of the plate, calculates the geometry, and reduces offset errors. This computational capability allows the Faro to measure and calculate an offset value with high precision and leads to a more accurate measurement of the maximum indent.
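Conceptually, the Faro's advantage comes from reconstructing the local preshot surface instead of relying on a single aim-point reading. The sketch below illustrates that idea with a hypothetical inverse-distance interpolation over a small grid of preshot points; the Faro's actual surface reconstruction is far richer than this.

```python
def bfd_from_preshot_grid(preshot_points, deepest_point):
    """Depth of the deepest postshot point relative to a preshot surface
    interpolated from several grid points (inverse-distance weighting).
    Points are (x_mm, y_mm, z_mm) tuples."""
    x, y, z_post = deepest_point
    num = den = 0.0
    for px, py, pz in preshot_points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return pz - z_post        # deepest point lies under a grid point
        w = 1.0 / d2
        num += w * pz
        den += w
    return num / den - z_post         # local surface height minus crater floor

# Flat hypothetical preshot surface at z = 10 mm, crater floor at -34 mm
grid = [(0, 0, 10.0), (0, 2, 10.0), (2, 0, 10.0), (2, 2, 10.0)]
depth = bfd_from_preshot_grid(grid, (1, 1, -34.0))   # 44.0 mm
```

On a curved plate the grid points would carry different heights, and the interpolated local surface, rather than the aim-point reading, would set the depth reference.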
As described in Chapter 4, there is significant variability in the RP #1 modeling clay that has been used for decades as the backing material in the testing process.
- RP #1 modeling clay was, and continues to be, designed for artists rather than for the ballistics testing community. Because artists have requested a certain feel and other characteristics, the formulation has changed over time; from the standpoint of the ballistics testing community, the clay now allows less deformation than the original RP #1.
31The equation used is Offset = -0.25 × BFD. The test instruction sheet was shown to members of the committee during site visit to the ATC, August 31, 2010.
- To compensate for the change in formulation, the testing community has had to warm the clay in ovens to achieve the calibration numbers required by National Institute of Justice (NIJ) standards. Heat introduces significant variation.
- The amount of manual working that is performed using mallets to initially pound the clay bricks into the testing box or recondition the clay after a test shot introduces additional variability in clay deformation. As described in Chapter 4, some studies indicate that this human dynamic of working might introduce even more variability in the deformation of clay measurements than changes in temperature.
- It has been observed over the years of body armor testing that clay in a box used for testing has a limited useful life. Since old clay may result in unreliable deformations during testing, both government and commercial testers routinely discard clay before it is a year old. Although variability due to aging appears to be less than variability introduced by temperature and working, it is one more indicator of the significant variability that is inherent in RP #1.
Owing to the above and other considerations, the NIJ standard allows for significant modeling clay variation. Specifically, to determine whether the modeling clay is ready for testing, it must calibrate to a specification of 25 mm ± 3 mm. In other words, 6 mm of overall clay variability is accepted as, and perhaps understates, the noise in the testing process related to clay.
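The acceptance band amounts to a simple tolerance check on the calibration indents. The function below is an illustrative sketch of that check only, not the NIJ calibration procedure verbatim.

```python
def clay_ready(calibration_indents_mm, nominal=25.0, tol=3.0):
    """NIJ-style readiness check (simplified sketch): every calibration
    indent must fall within the 25 +/- 3 mm band."""
    return all(abs(d - nominal) <= tol for d in calibration_indents_mm)

ok = clay_ready([24.1, 26.8, 25.5])    # all indents within the band
bad = clay_ready([21.5, 25.0, 27.9])   # 21.5 mm falls outside the band
```

Note that the full ±3-mm band spans 6 mm, an order of magnitude wider than the 0.5-mm instrument precision discussed earlier, which is the sense in which clay, not instrumentation, dominates the noise.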
RP #1 introduces a great deal of variation into the measurement system. As discussed in Chapter 4, there is much merit in reducing the variability of the recording medium: with less variability, testers can more confidently attribute differences in test results to plate behavior. One battlefield payoff would be greater confidence that the plates will function successfully in combat; another is the possibility that lighter plates could pass the tests, ultimately reducing the weight burden on soldiers.
Some additional variation results from the ammunition used during testing. If, for example, a tester were to replicate the real-world threats a soldier might face, that tester would use ammunition procured from third-world countries. Such ammunition may not have been manufactured to specifications that ensure consistent velocity and bullet mass from round to round. Variation in velocity causes variation among the BFDs created during testing, because the deposited energy is proportional to the velocity squared. Bullet velocity measurements are part of ATC testing procedures and should be part of all live-fire tests. Within a single manufactured lot, small-arms ammunition can exhibit velocity variation corresponding to a 12 percent difference in deposited energy.
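The velocity-squared dependence can be checked directly with the kinetic energy relation E = ½mv². The bullet mass and nominal velocity below are illustrative values, not specifications of any particular round.

```python
def kinetic_energy_j(mass_g, velocity_mps):
    """Kinetic energy in joules for a projectile of the given mass (grams)."""
    return 0.5 * (mass_g / 1000.0) * velocity_mps ** 2

# Because E scales with v squared, a ~6 percent velocity spread
# roughly doubles into a ~12 percent energy spread.
e_nominal = kinetic_energy_j(8.0, 240.0)     # nominal round
e_fast = kinetic_energy_j(8.0, 240.0 * 1.06) # 6 percent faster round
energy_spread = e_fast / e_nominal - 1       # ~0.124, about 12 percent
```

This is why even modest round-to-round velocity scatter translates into a noticeable spread in the energy delivered to the plate, and hence in the resulting BFDs.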
Variability can also result from human handling or subjective software selection within the measurement systems. As discussed previously, such variability can result in different measurements for the same BFD.
An important issue that should be addressed is the importance of having a measurement standard to determine the ability of any given device (caliper, Faro, etc.) to precisely measure a representative BFD regardless of the organization, measurement instrument, software version, operator, and so forth.
In the development of methods to measure BFDs, virtually no interlaboratory testing has been carried out to date to determine the sources of interlaboratory error arising from differences in test procedures, in the setup of the test equipment, or in its operation. Interlaboratory errors are often systematic, producing a constant statistical difference in BFD measurement from one laboratory to the next. These measurement errors can lead to the undue acceptance or rejection of lots of ceramic armor. Interlaboratory errors can be rooted out by having several laboratories run the same test with the same or equivalent instrumentation; the source of an error can then be identified and eliminated by a change in experimental procedure or equipment.
A physical artifact would replicate a standard BFD cavity and perform a gauge block function for noncontact instruments. That said, the BFD standard artifact should be more than just a gauge block. Rather it should represent a physical model of the complete BFD measurement process. It should allow operators to follow a four-step process:
- Measure a representative preshot surface;
- Measure a representative postshot BFD crater;
- Subtract the two numbers; and
- Compare the number from the previous step with the artifact’s standard depth.
The result would quickly determine whether the device, as used, was sufficiently accurate for this application. For example, a complete artifact system could be made that mimics the preshot surface with a flap covering a replicate BFD crater. Such a model could be made of hard plastic, or a softer coating could be applied. While the thickness of the flap would affect the absolute readings, the relative readings between organizations and operators would not be affected. A single artifact system, upon acceptance by NIST and the testing community, would become the one national standard for quickly confirming a device’s precision and accuracy for measuring a BFD.
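The four-step artifact check described above reduces to a subtraction and a comparison. The sketch below uses the 0.5-mm precision threshold from earlier in the chapter purely as an illustrative acceptance band; an actual artifact standard would specify its own tolerance.

```python
def device_passes_artifact(preshot_mm, postshot_mm, certified_depth_mm,
                           max_error_mm=0.5):
    """Steps 1-4 against the artifact: measure the reference surface,
    measure the replicate crater, subtract the two readings, and
    compare the result with the artifact's certified depth."""
    measured_depth = postshot_mm - preshot_mm
    error = measured_depth - certified_depth_mm
    return abs(error) <= max_error_mm, error

# Hypothetical readings against a 44.00-mm certified artifact
passed, err = device_passes_artifact(10.0, 54.3, 44.0)  # passes, +0.3 mm
```

Running this check at each laboratory, with each instrument and operator, is exactly the kind of quick confirmation the artifact system is meant to enable.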
Previous work by NIST has established the usefulness of such a standard (NIST, 2010). The committee supports turning this idea into a practical solution for the entire body armor testing community. In addition to the test standard just described, evaluation of interlaboratory test variation is important for establishing test reliability. Hence, interlaboratory tests should be run in order to establish the accuracy of a test as well as its precision.
Recommendation 5-2: An organization such as the National Institute of Standards and Technology should develop a standard backface deformation artifact system and procedures to allow operators to ensure that different measurement devices at different locations are able to meet specified levels of accuracy and precision.
Based on the preceding discussion of the instruments and procedures for measuring BFD, the committee developed criteria for a measuring device that would provide the “best utility” for the body armor testing applications. A best utility measurement device must meet the following criteria:
- Meet or exceed precision and accuracy requirements for measuring BFD;
- Achieve the lowest practical fixed and variable costs; and
- Minimize human judgment and error.32
In addition, it would be advantageous if the instrument could also
- Be versatile enough to measure indents behind both plates and helmets33 and
- Be widely available and supportable here and abroad.
One instrument with future potential is being evaluated at the Army Research Laboratory.34 The MicroScribe Model G2LX is a digital-arm/mechanical-scribe instrument that the Army Research Laboratory is using in research on the BFD cavities formed in the head forms used to test helmets.
The G2LX, which costs approximately $8,500,35 has an advertised precision of 0.012 in. (0.3 mm). The system is connected to a computer that can capture measurements made by the operator based on a three-dimensional x, y, and z coordinate system. It also has an automated database that captures
32Capturing measurement readings in an automated database is helpful. Expert testing operators who spoke with the committee agreed that manually capturing readings can lead to transposition and other errors. There are commercially available automated database interfaces for both contact and noncontact instruments.
33See Chapter 7 for a description of the helmet testing process. The differences between armor plate testing and helmet testing are considerable, and all operators interviewed agreed that a laser-based measuring tool was generally preferred for helmet testing due to the complex curves of the head form, on which the helmet BFD measurements must be made.
34Committee site visit to the ATC, August 31, 2010; Rob Kinzler, Army Research Laboratory, “Improvements in Helmet Measurement,” presentation to the committee, October 13, 2010.
measurements made by the user. The user can activate a finger or foot switch to notify the computer to enter the current measurement into the database.
During a demonstration to the committee, the time required to measure a clay indent appeared to be less than that required by a caliper since there is no need to set up a bridge gauge. The G2LX system is a basic one-point contact measurement system, which means it suffers from the same inability to compensate for offset as does the caliper.36 The system combines a fairly inexpensive robotic arm capability, similar to that of the Faro system, with an inexpensive hard-mount caliper scribe end.
The MicroScribe system could be used for testing both body armor plate and helmet BFD measurements (although a finer grid pattern is needed for helmet testing) and could significantly reduce the offset error currently seen with the caliper, which uses one preshot measurement.
MicroScribe offers a more sophisticated arm advertised to achieve a precision of 0.003 in. (0.0762 mm) on the upgraded MLX model; it costs approximately $22,000. The robotic arm can be outfitted with a laser scanner similar to that used on the Faro for an additional $15,000 or so. The MicroScribe system is just one example of instruments that are available in the commercial sector. The committee believes there are also others that have “best utility” characteristics and are readily available.
Finding: The data available to the committee were not obtained through a formal gauge or “artifact standard” repeatability and reproducibility study by an independent agency. Thus, the committee can draw no quantitatively reliable conclusions about the precision and accuracy (potential biases) of the measurement systems it examined.
Late in the course of the committee’s final deliberations, as it prepared this report, it received additional test data that had not been available earlier in the effort (see Appendix M). Considering all available data, the committee recognized (1) insufficient sample sizes for all the data examined; (2) inconsistencies in the direction and magnitude of biases; and (3) presumed large differences in offset magnitudes between the data in Hosto and Miser (2008) and the more recent live-fire test experiences.
The committee determined from its analysis of the available data that properly designed evaluations are needed to establish the accuracy and precision of current or proposed instruments for measuring body armor BFD before definitive conclusions can be drawn regarding best utility.
Recommendation 5-3: In anticipation of future test measurement requirements, the Office of the Director, Operational Test and Evaluation and/or the Army should charter an organization such as the National Institute of Standards and Technology to conduct an analysis of available candidate commercial instruments, with inputs from vest users, manufacturers, testers, policy makers, and others. The goal is to identify for the government, industry, and private testing labs one or more devices meeting the characteristics of “best utility” measuring instruments as defined in this study.

36 The offset measurement problem could be overcome by having the operator enter several point measurements on the surface of the clay near the aim point before the test round is fired. The extent of the grid pattern (e.g., 3 × 3 vs. 4 × 4) would depend on the accuracy required of the BFD measurement.
The list of best utility instruments should be shared with NIJ, international allies, and others, as appropriate, to promote measuring instrument standardization for body armor testing nationally and internationally. A formal gauge repeatability and reproducibility study is required to quantify accuracy and precision as inputs to the best utility analysis.
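The kind of variance decomposition such a gauge R&R study would produce can be sketched as follows. The data and operator names are invented for illustration; a real study would use many more parts, operators, and replicates and a full ANOVA treatment.

```python
# Illustrative sketch of a gauge repeatability-and-reproducibility (R&R)
# calculation. Data are hypothetical: two operators each measure the
# same clay indent three times (mm).
from statistics import mean, variance

measurements = {
    "operator_A": [43.8, 44.1, 44.0],
    "operator_B": [44.6, 44.4, 44.5],
}

# Repeatability: pooled within-operator variance (equipment variation).
within = mean(variance(v) for v in measurements.values())

# Reproducibility: variance of operator means (appraiser variation),
# less the portion attributable to repeatability.
op_means = [mean(v) for v in measurements.values()]
n_repeats = 3
between = max(variance(op_means) - within / n_repeats, 0.0)

# Total gauge R&R variance: measurement-system variation from both sources.
gauge_rr = within + between
print(round(within, 4), round(between, 4), round(gauge_rr, 4))
```

In this sketch the between-operator component dominates, which would point to a reproducibility problem (operator technique) rather than instrument repeatability; a formal study would compare the gauge R&R variance against the tolerance on the 44-mm BFD limit.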
REFERENCES

DOT&E (Director, Operational Test and Evaluation). 2010. Memo on Standardization of Hard Body Armor Testing, April 27, 2010.
Gryth, D., D. Rocksen, J. Persson, U. Arborelius, D. Drobin, J. Bursell, L. Olsson, and T. Kjellstrom. 2007. Severe Lung Contusion and Death After High-Velocity Behind-Armor Blunt Trauma: Relation to Protection Level. Military Medicine 172(10): 1110-1116.
Hosto, J. and C. Miser. 2008. Quantum FARO Arm Laser Scanning Body Armor Back Face Deformation. Report No. 08-MS-25. Aberdeen, Md.: U.S. Army Aberdeen Test Center Warfighter Directorate, Applied Science Test Division, Materials and Standards Testing Team.
McNeese, W., and R. Klein. 1991. Measurement Systems, Sampling, and Process Capability. Quality Engineering 4(1): 21-39.
NIST (National Institute of Standards and Technology). 2010. Dimensional Metrology Issues of Army Body Armor Testing. Gaithersburg, Md.: NIST.
NRC (National Research Council). 2009. Phase I Report on Review of the Testing of Body Armor Materials for Use by the U.S. Army: Letter Report. Washington, D.C.: National Academies Press.
Prather, R., C. Swann, and C. Hawkins. 1977. Backface Signatures of Soft Body Armors and the Associated Trauma Effects. ARCSL-TR-77055. Aberdeen Proving Ground, Md.: U.S. Army Armament Research and Development Command Technology Center.
Walton, S., A. Fournier, B. Gillich, J. Hosto, W. Boughers, C. Andres, C. Miser, J. Huber, and M. Swearingen. 2008. Summary Report of Laser Scanning Method Certification Study for Body Armor Backface Deformation Measurements. Aberdeen Proving Ground, Md.: Aberdeen Test Center.