average of all the entries. Those observations further suggest a singular character for those failures, possibly related to the level of maintenance, inspection, and control resources that may be available for large as opposed to small projects. The only remaining failure (Southwest) does not appear to have major potential disqualifying issues except for reported difficulties in consistently meeting CP goals as well as potential PE damage from aggregate during placement.1 Consequently, the author estimates that only one failure merits serious consideration for analysis against the expectations from the benchmark stated by the Bureau of Reclamation.

The second issue concerns this author’s disagreement with the analysis methodology used in Chapter 4 to compare evidence of field failures with the expectations from the benchmark of 0.000044 failures per mile per year stated by Reclamation, which will be treated as an agency-specified parameter in the following. The author contends that because the DIP with PE and CP failure data set is very sparse, interpreting those data to calculate a nominal failure frequency for comparison with the benchmark is not appropriate; it is akin to comparing, over a short period, the nominal death rate of a small community with that of a large city. Thus, comparison between this nominal rate and benchmark rates, as used in Chapter 4 and cited in Chapter 6, “Findings, Conclusions, and Recommendations,” is not warranted in the view of the author. The corresponding sensitivity analysis in Chapter 4 does not resolve this concern, as such analysis merely extends the undue significance assigned to the nominal rate.
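The small-community analogy can be made concrete with a simple simulation. The sketch below is a hypothetical illustration, not part of the report’s analysis: the two exposure figures are assumed for demonstration only. It draws Poisson failure counts at the benchmark rate for a small and a large inventory and compares the scatter of the resulting nominal rates.

```python
import math
import random

BENCHMARK = 4.4e-5  # benchmark failures per mile per year (from the text)

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate using Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, product = 0, 1.0
    while True:
        product *= rng.random()
        if product <= limit:
            return k
        k += 1

def nominal_rate_spread(mile_years, trials=5000, seed=0):
    """Standard deviation of the nominal rate (count / exposure) across trials."""
    rng = random.Random(seed)
    lam = BENCHMARK * mile_years
    rates = [poisson_sample(lam, rng) / mile_years for _ in range(trials)]
    mean = sum(rates) / trials
    return math.sqrt(sum((r - mean) ** 2 for r in rates) / trials)

# A sparse inventory yields far noisier nominal rates than a large one,
# so a single nominal rate from sparse data is a poor basis for comparison.
small = nominal_rate_spread(10_000)     # hypothetical small inventory
large = nominal_rate_spread(1_000_000)  # hypothetical large reference system
```

For the assumed exposures the theoretical spread of the nominal rate is √λ divided by the exposure, so the small inventory’s nominal rate varies by more than the benchmark rate itself, while the large system’s estimate is comparatively stable.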

The author proposes instead to estimate, using the benchmark rate, the probability of observing the number of failures recorded in the DIP with PE and CP experience inventory over the amount of pipe length-years in that inventory. If that probability is appreciably large, the DIP with PE and CP failure data may not be indicative of diminished performance compared to that of the benchmark set. Conversely, if that probability is very small, the data may be seen as indicative of diminished performance relative to the benchmark. It is emphasized that this analysis is limited to the implications of observed failure events; other evidence of performance, such as the presence of corrosion in the absence of failures, was considered elsewhere in the report, as well as in discussions based on corrosion engineering principles.
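Under the benchmark rate, the failure count in an inventory with a given exposure follows a Poisson distribution, so the probability the author proposes can be sketched as below. The exposure and observed count used here are hypothetical placeholders, since this excerpt does not give the inventory’s actual figures.

```python
import math

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam): complement of the CDF up to k - 1."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

BENCHMARK = 4.4e-5    # benchmark failures per mile per year (from the text)
mile_years = 50_000   # hypothetical exposure of the DIP with PE and CP inventory
observed = 1          # hypothetical number of observed failures

lam = BENCHMARK * mile_years        # expected failures under the benchmark
prob = poisson_tail(observed, lam)  # chance of seeing at least that many
# A large prob means the data are consistent with the benchmark;
# a very small prob suggests diminished performance.
```

The decision rule in the text then reads directly off `prob`: an appreciably large value means the observed failures are unsurprising under the benchmark, while a very small value indicates diminished performance.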

In the following, the benchmark failure rate stated by Reclamation will be assumed to be numerically equal to the probability P of one failure occurring per mile per year in a hypothetical large reference system (or “benchmark system”), so P = 4.4 × 10⁻⁵. That assumption is adopted considering that the arguments presented by Reclamation in developing the benchmark involved a considerable


1. Graham E.C. Bell, Schiff Associates, “Measurements of Performance of Corrosion Control Mechanisms on DIP,” presentation to the committee, Washington, D.C., July 29, 2008.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.