The initial meeting was attended by Charles Miller, of CDC, and John Till and Paul Voillequé, of RAC, who presented an overview of their work and responded to questions raised by members of the committee. A second meeting of the committee occurred on October 26, 2001, at the Beckman Center in Irvine, California; its principal aim was to complete the committee’s report to CDC.
The paragraphs that follow set out the committee’s evaluation of the draft RAC report. Our comments are organized around the questions above. The committee notes that questions 1 and 2 deal with the quality and completeness of the RAC report whereas questions 3 and 4 are related to issues stemming from that report but not specifically identified in the task description. In the two appendixes, the committee gives examples of specific issues associated with the RAC report that need to be addressed (Appendix A) or offers editorial suggestions (Appendix B).
Question 1. Were the methods and sources of information used in the draft report appropriate?
The methods used in the RAC report to estimate worst-case doses to people living or working near the Hanford production facilities from radioactive particles and short-lived radionuclides are not entirely appropriate. Pessimistic assumptions or parameter values were used in some aspects of the analyses, arbitrary values in others, and median values elsewhere. The factors by which the resulting doses overestimate the doses that would have been obtained by using realistic assumptions and parameter values are not calculated, but they are likely to differ from one scenario to another. It would have been more logical to use realistic assumptions and best estimates of the parameter values throughout the dose calculations and to multiply the resulting realistic dose estimates by the same safety factor for all scenarios.
It should be noted that the National Council on Radiation Protection and Measurements (NCRP) techniques (Report 123, 1996) used to screen the radionuclides are not entirely appropriate in that they were “designed primarily for facilities that handle small quantities of radioactive materials released as point-source emissions…and apply to intermittent or continuous releases of radionuclides to the environment during routine operations over a period of 30 years with exposure to the releases assumed to be during a 1 year period of the last year.” In addition, the criteria used to eliminate four radionuclides (89Sr, 91Y, 95Zr, and 141Ce) are not specified and were based on calculations made with releases for only 8 months; the 8 months were selected between October 1945 and February 1956 in a seemingly arbitrary manner and all months were given the same weight. The NCRP techniques were not used for the evaluation of the emission of large radioactive particles, and rightly so, although no conceptual basis was given for separating them from other radioactive releases.
The rationale for and basis of worst-case estimates need to be clearly articulated; such estimates are intended to serve only as guidance regarding the potential need for further study, not as indicators of realistic doses to people. Put another way, the purpose of estimating worst-case doses was to scope the potential nature and extent of the risk associated with the releases being studied. However, a worst-case analysis does not provide that insight if the cases are not credible. It warrants noting that the worst-case scenarios that were studied were not defined by CDC; they were chosen by RAC, apparently without guidance or consensus from a broader panel.