Advisers to the Nation on Science, Engineering, and Medicine
National Academy of Sciences
National Academy of Engineering
Institute of Medicine
National Research Council
DIVISION ON EARTH AND LIFE STUDIES
Board on Radiation Effects Research
Dr. James M. Smith
Chief, Radiation Studies Branch
Centers for Disease Control and Prevention
4770 Buford Highway, NE, Mailstop F35
Atlanta, GA 30341–3742
December 19, 2001
Dear Dr. Smith:
This letter report is written in response to a request from the Radiation Studies Branch of the Centers for Disease Control and Prevention (CDC) that the National Research Council convene a committee to review a draft report prepared by the Risk Assessment Corporation (RAC) titled Methods for Estimating Radiation Doses from Short-Lived Gaseous Radionuclides and Radioactive Particles Released to the Atmosphere During Early Hanford Operations. As a preliminary to this review, CDC provided the committee with a copy of the contract under which the work of RAC was implemented. The task order (No. 3, July 3, 1996) notes that “the objective of this work is to develop the information and techniques necessary to estimate worst-case doses to people living or working near the production facilities from radioactive particles and short-lived radionuclides.” The description of the task continues: “The contractor will produce computational tools that will allow CDC to estimate the doses to the public from these short-lived nuclides and particles and test the sensitivity of these dose estimates to various input parameters.”
At the initial meeting of the committee in Washington, DC, on July 30–31, 2001, CDC specifically asked the committee to address the following questions:
Were the methods and sources of information used in the draft report appropriate?
Are the methods and results clearly presented?
This report is a screening calculation. If we proceed to do a more detailed dose reconstruction, (a) how would we improve the dosimetry, (b) what would we do to reduce the uncertainty, and (c) are the scenarios analyzed sufficiently representative?
How do we make a meaningful assessment of the risk posed by exposure to these short-lived gaseous radionuclides and radioactive particles so that we can best communicate the potential health risks to the public?
2101 Constitution Avenue, NW, Washington, DC 20418 USA
202–334–2232 (telephone) 202–334–1639 (fax) national-academies.org
The initial meeting was attended by Charles Miller, of CDC, and John Till and Paul Voillequé, of RAC, who presented an overview of their work and responded to questions raised by members of the committee. A second meeting of the committee occurred on October 26, 2001, at the Beckman Center in Irvine, California; its principal aim was to complete the committee's report to CDC. The paragraphs that follow set out the committee's evaluation of the draft RAC report. Our comments are organized around the questions above. The committee notes that questions 1 and 2 deal with the quality and completeness of the RAC report, whereas questions 3 and 4 are related to issues stemming from that report but not specifically identified in the task description. In the two appendixes, the committee gives examples of specific issues associated with the RAC report that need to be addressed (Appendix A) or offers editorial suggestions (Appendix B).

Question 1. Were the methods and sources of information used in the draft report appropriate?

The methods used in the RAC report to estimate worst-case doses to people living or working near the Hanford production facilities from radioactive particles and short-lived radionuclides are not entirely appropriate. Pessimistic assumptions or parameter values were used in some aspects of the analyses, arbitrary values in others, and median values elsewhere. The factors by which the resulting doses overestimate the doses that would have been obtained by using realistic assumptions and parameter values are not calculated, but they are likely to differ from one scenario to another. It would have been more logical to use realistic assumptions and best estimates of the parameter values throughout the dose calculations and to multiply the resulting realistic dose estimates by the same safety factor for all scenarios.
It should be noted that the National Council on Radiation Protection and Measurements (NCRP) techniques (Report 123, 1996) used to screen the radionuclides are not entirely appropriate in that they were "designed primarily for facilities that handle small quantities of radioactive materials released as point-source emissions…and apply to intermittent or continuous releases of radionuclides to the environment during routine operations over a period of 30 years with exposure to the releases assumed to be during a 1 year period of the last year." In addition, the criteria used to eliminate four radionuclides (89Sr, 91Y, 95Zr, and 141Ce) are not specified and were based on calculations made with releases for only 8 months; the 8 months were selected between October 1945 and February 1956 in a seemingly arbitrary manner, and all months were given the same weight. The NCRP techniques were not used for the evaluation of the emission of large radioactive particles, and rightly so, although no conceptual basis was given for separating them from other radioactive releases. The rationale for and basis of worst-case estimates need to be clearly articulated; they are intended to serve only as guidance regarding the potential need for further study, not as indicators of realistic doses to people. Put another way, the purpose of estimating worst-case doses was to scope the potential nature and extent of the risk associated with the releases being studied. However, a worst-case analysis does not provide that insight if the cases are not credible. It warrants noting that the worst-case scenarios that were studied were not defined by CDC; they were chosen by RAC, apparently without guidance or consensus from a broader panel of experts. It should be noted that some of these problems might have been avoided if the CDC had provided more specific guidance to the RAC in its original task statement.

It is not clear that the information and techniques provided by RAC produce worst-case estimates, particularly for the examples in Section 5. RAC has estimated uncertainties in the emissions of radionuclides and provided that information; but in calculating dose estimates, RAC has used only the median estimates of emissions, and the HCalc tool it provides is set up to use the median estimates. RAC did not demonstrate that the resulting dose estimates are worst-case estimates, as required by the task order. Other, higher estimates of emissions (such as 95th percentile) could be used and would almost certainly result in worst-case dose estimates, but RAC has refrained from using them, because it claims that they are unreasonable. RAC has not provided sufficient information or the techniques to provide more reasonable worst-case estimates. To establish more reasonable and possible worst-case estimates, RAC needs to provide at least the following information and tools for the nonradioiodine radionuclides:

Expected month-to-month correlations between the monthly emission-factor distributions provided.
Expected radionuclide-to-radionuclide correlations between the monthly emission-factor distributions provided.
Expected plant-to-plant correlations between the monthly emission-factor distributions provided.
A tool (such as a Monte Carlo program) that can incorporate all the distributions and their correlations in the dose estimates.

It is impossible to tell whether RAC considers the distributions provided for emission factors to be distributions of month-to-month variability in emission factors or simply indications of the uncertainty in the long-term average emission factors.
Similarly, it is impossible to tell whether RAC assumes that the same emission factor applies to all (nonradioiodine) radionuclides in each month (that is the implication of the model that RAC uses to justify using a single distribution) or whether the emission factor might be different for different radionuclides in a given month. Finally, it is impossible to tell whether RAC considers the emission-factor estimates to be the same for all plants operating in a given month. Evaluation of those questions requires examination of the original measurements on which RAC based its information, but the committee is not in a position to perform such an evaluation: one reason is that RAC does not provide references for most such measurements. To provide the conceptual framework for what follows in RAC’s report, the different classes of exposure circumstances should be described independently of calculations. The doses from particle inhalation or ingestion should be interpreted in terms of the fraction of dose received by an organ.
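A Monte Carlo tool of the kind the committee asks RAC to provide could be sketched as follows. This is a purely illustrative sketch: the lognormal emission-factor parameters, the correlation value, the source term, and the dose-conversion factor are all invented for illustration and do not come from the RAC report.

```python
import math
import random

# Illustrative Monte Carlo propagation of correlated monthly emission
# factors to an annual dose.  ALL parameter values are invented.
random.seed(42)

MONTHS = 12
GM, GSD = 1.0e-4, 3.0   # assumed geometric mean / geometric SD of emission factor
RHO = 0.5               # assumed month-to-month correlation on the log scale
RELEASE = 1.0e6         # assumed monthly source term (arbitrary units)
DCF = 1.0e-3            # assumed dose per unit activity emitted (arbitrary units)

def simulate_doses(n_trials=20_000):
    """Sample correlated monthly emission factors and propagate them to dose."""
    mu, sigma = math.log(GM), math.log(GSD)
    doses = []
    for _ in range(n_trials):
        z = random.gauss(0.0, 1.0)
        dose = 0.0
        for _month in range(MONTHS):
            # An AR(1) process on the log scale induces serial correlation RHO
            factor = math.exp(mu + sigma * z)
            dose += factor * RELEASE * DCF
            z = RHO * z + math.sqrt(1.0 - RHO**2) * random.gauss(0.0, 1.0)
        doses.append(dose)
    return sorted(doses)

doses = simulate_doses()
median = doses[len(doses) // 2]
p95 = doses[int(0.95 * len(doses))]
print(f"median dose {median:.3g}, 95th percentile {p95:.3g}")
```

The point of such a tool is visible even in this toy version: the 95th percentile of the dose distribution is well above the median, so a "worst-case" estimate built from median emission factors alone understates the spread that the correlations and distributions jointly produce.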
Question 2. Are the methods and results clearly presented?

The committee is impressed by the amount of work that the RAC report represents, but it finds the presentation of the methods and results unsatisfying in the following important respects.

First, the use of several units for activity (Bq and Ci) and for dose (rem, rad, Gy, and Sv) makes the report difficult to read and makes it difficult to compare one situation with another.

Second, the report is not well organized, so it is not easily read, and its information and conclusions are not clear. For example, it would be better if the historical monitoring data were presented before the results of this analysis. Similarly, the discussion of site geography and of available measurements should come before the estimates of emission rates. As it is, the reader has no idea how the estimates in Sections 2 and 3 were obtained (and it is not all that clear even after Section 4 is read). A historical summary with a chronology of important events would substantially assist a reader who is not familiar with the site and its history. Important events include startups and shutdowns of the reactors; startups and shutdowns of the T, B, Redox, and Purex plants; first and later observations of particle emissions; and institution and retraction of specific control practices (such as the wearing of respirators in some areas). The data on details of reactor operations as they influence radionuclide releases should be presented chronologically so that temporal consequences can be more readily understood. Although the RAC report builds heavily on the information accumulated in the course of the Hanford Environmental Dose Reconstruction (HEDR) project, it would be helpful to have a clear description of the records sought, found and not found, and analyzed. The committee also notes the lack of on-line availability of the extensive HEDR documents.
Third, the specification of sources of information, methods of analysis, and assumptions underlying those analyses is inadequate and frequently nonexistent. For example, the committee often found it difficult to know whether some of the tabular information represented calculations made by RAC or by others or merely abridged versions of data presented more fully elsewhere. Whenever possible, the data used and cited should be the original data instead of summary publications (such as quarterly reports). That would make it clear that numerical values, where available, rather than assumed models were used to determine the distributions. There is a particularly important lack of documentation regarding ruthenium particles. The RAC report states (page 3–43) that "highly radioactive particles were reported to contain about 200 µCi", without any data, or even a reference, as documentation. The report goes on to use 300 µCi as the chosen estimate for the gastrointestinal intake highlighted in the summary chapter. It is not evident from the material available to the committee that the use of the 300 µCi value is justified. Are frequency-distribution data available to support the use of this value?

Fourth, the period the report covers needs to be stated. There is no such statement except for the ambiguous "early years". Moreover, the document contains confusing references on this issue:

Page iv: "Estimates of the monthly releases of these eight radionuclides were made for the period from December 1944 through December 1961."
Page 3–14, Section 3.2.4, Item 4 indicates a "temporal scope" of 12 years, but which 12 years is not stated.
Page 3–27 refers to a limitation of HCalc to the "assessment's temporal scope (Oct-45 through Dec-61)".
Page 4–30 states "began in the fourth quarter 1951 and continued through our time period of interest (1955)", suggesting that 1955 is one limit.

Confusion over the period considered is propagated by the source codes and data files of HCalc provided to the committee. The source codes are set up to cover the period October 1944–December 1961, but internally they contain comments that the data must be limited to October 1944–February 1956. The data files on which HCalc operates, however, contain inconsistent cutoff dates: 41Ar emissions from the reactors are cut off at February 1956, but iodine emissions from the Redox and Purex plants continue through December 1961.

The accuracy of the new codes (such as HCalc) that RAC has developed and used needs to be validated and compared with other codes, such as GENII, developed for similar purposes. The nature, methods, and results of the validation exercise should be presented in sufficient detail to permit others to assess the capability of the HCalc code to produce credible estimates. In addition, how do the calculations of HCalc compare with the estimates made in 1951 (see the bibliography: Message to R.W. Cook, US Atomic Energy Commission, Washington, DC)? It would be helpful to explain why the codes used by the HEDR project could not also be used (or adapted) for this project.

Fifth, the presentation of many of the findings is far from transparent. For example, it is difficult to identify the formulas underlying many of the calculations, the values of the parameters used in the calculations, or the sources of the parameter values.
Similarly, a "geographic section" that explains and shows on a new map where all the areas are (or were) and the various names that they have been given at various times would be helpful. For example, even as late as page 4–30, the reader is being introduced to new names for the same areas:

PSN 330, B-Battery, H-40 (page 3–2)
PSN 330, aka H-40 (pages 4–6, 4–7, 4–9, with PSN 320 equated to H-50, PSN 310 equated to H-51, and PSN 300 equated to H-61)
PSN-300-310-320 (page 3–29, treated as though a single area)
PSN-300-310-320 equated to PSN 50-51-61 equated to H 50-51-61 (page 4–30, suggesting a different correspondence from that previously established)

Where is Columbia Camp, which first appears on page 4–24? Page 4–25 gives an approximate location but points to a map in Appendix B-2. Where is Ringold? The third scenario given in this report (page 3–1) considers a resident of Ringold, but the location of Ringold does not seem to be shown on any map.

Sixth, transcription errors are present in some of the data. Discussion with Mr. Voillequé during the first public meeting indicated that independent checking of such transcriptions (by another person) was not consistently done, nor were such simple diagnostics as graphs used. The committee recommends the use of independent checking or other techniques to avoid or minimize transcription errors.

The report appears to have omitted the 239Np produced in the fuel in making emission estimates for 239Pu. In PNWD 2222 HEDR, the 239Pu estimates given in Table B-6, and apparently transcribed into Section 2.xls as the basis for emission estimates, omitted the 239Pu produced by decay of 239Np present at reactor discharge. According to the text of PNWD 2222 HEDR, the additional 239Pu present after the cooling period (at the time of processing) was to be added to provide the basis of emission rate estimates for the HEDR project. By the time of processing, this additional 239Pu averaged about 3% of the total. In the RAC report, however, the emission rate estimates are independent of nuclide, so 239Np is considered to be emitted in the same fractional amount as 239Pu and will decay to 239Pu in the environment, requiring slightly different treatment from that used in the HEDR project.

To summarize briefly, the report does not contain sufficient information to be considered a stand-alone document, and many of the data and analyses used are not adequately reported or referenced. The committee recommends that all the data that are abstracted from original documents and used in any way in such reports, and all the analyses performed on those data, be provided as supplementary information in the form of spreadsheets, program data files, and so forth, in whatever way the data were used by the analysts.
The object should be to allow any reader or reviewer to reproduce exactly the analyses described in the report without having to return to the original documents to reabstract the data. It would be helpful to provide a compilation of electronic (scanned) copies of all, or at least most of, the references included in the bibliography on a companion CD-ROM, particularly inasmuch as such scanned copies are already available on the Internet (in a much less accessible way). Such a resource would greatly assist review.

Question 3. This report is a screening calculation. If we proceed to do a more detailed dose reconstruction, (a) how would we improve the dosimetry, (b) what would we do to reduce the uncertainty, and (c) are the scenarios analyzed sufficiently representative?

3.a: Once very basic considerations are accommodated (such as period and duration of time spent at Hanford and whether a person lived on-site or off-site), it appears extremely unlikely that additional dose reconstruction (using more-detailed worker information) would give a useful estimate of a personal dose. Mr. Voillequé's remarks to the committee on uncertainty in individual dose reconstructions indicate that the correlations between true personal exposure and estimated exposure appear to be very low. Rather than focusing initially on plans for a detailed dose reconstruction at the individual level, it might be more helpful to give a broad estimate of the uncertainties in an overall collective dose and in estimates of the number of associated excess cancer cases among the construction workers and military personnel at Hanford. The committee recommends using realistic assumptions and best estimates of parameter values throughout the dose calculations and multiplying the resulting realistic dose estimates by the same safety factor for all scenarios. A more time-consuming option would be to estimate the uncertainties attached to the most sensitive parameter values (which need to be determined), to calculate the probability distribution of the resulting collective dose, and to use the 95th percentile.

3.b: The main purpose of providing further individualized dose reconstruction was described in oral comments to the committee by Mr. Voillequé as identifying sources of uncertainty rather than reducing uncertainty. Uncertainties in a personal dose lead to uncertainties in derived quantities, such as a personal probability of causation for a Hanford worker or former military service member in whom cancer has been diagnosed. The uncertainty in an individual's dose will always be larger than the uncertainty in the population average dose, so the uncertainty in the probability of causation or another quantity is also greater at the individual level than at the population level. If the number of excess cases in the population is estimated to be small, a best estimate of excess risk or of probability of causation for any given individual is bound to be very low, because there appear to be no individuals or small groups who were disparately much more exposed than others.
It is possible, however, that if a 99% upper-confidence-interval rule is used by the Department of Veterans Affairs or other agencies charged with determining compensation levels, many individuals could be compensated because of large uncertainties at the individual level even if there is much less uncertainty about the population excess number of cases; specifically, the 99% upper confidence limit of the population excess number of cases might be far less than the number of persons compensated under the 99% rule. Therefore, the committee recommends that, at least initially, more focus be placed on the uncertainty in the collective dose and on the likely total excess number of cancer cases in the exposed population. The added discussion of uncertainty is needed to serve as a counterweight to the idea that the less that is known about an individual's exposure, the more likely it is that he or she had a large dose. The latter idea is true in a limited sense at the individual level but is potentially absurd when information about individual exposure is not also related to total population exposure, which might be far better known.

Particularly when dealing with the probability of causation, it is possible to argue in favor of relying on collective doses when good individual dosimetry is lacking. Consider a hypothetical case: the collective dose to a large group of workers might be known with some precision, but no available data allow the differentiation of subjects who could have been highly exposed from others in the group. Let µ be the average dose for this group of workers and RR(µ) - 1 be the excess relative risk at this dose (we are assuming linearity in dose response, so that relative risk depends only on the average dose and not on the details of the dose distribution).
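To make this hypothetical case concrete, the arithmetic can be carried through with invented numbers (a relative risk of 1.25 at the group's average dose and 400 expected baseline cases; neither value comes from the Hanford data):

```python
# Hypothetical group sharing one average dose; all numbers are invented.
rr = 1.25          # assumed relative risk RR(mu) at the group's average dose
n_baseline = 400   # cancers expected in a similar but unexposed population

excess_cases = n_baseline * (rr - 1.0)   # expected excess cases: 100.0
total_cases = n_baseline * rr            # expected total cases: 500.0
pc = (rr - 1.0) / rr                     # shared probability of causation: 0.2

# The shared probability of causation equals excess / total.
assert abs(pc - excess_cases / total_cases) < 1e-12
print(excess_cases, total_cases, pc)
```

Under this view, every case in the group carries the same probability of causation, and its uncertainty derives only from the uncertainty in the average dose, not from individual dosimetry.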
If N is the number of cancers expected in a similar but unexposed population, we would expect a total of N[RR(µ) - 1] excess cases of cancer in the hypothetical exposed population among the N[RR(µ)] total cases of cancer. In the absence of a useful individual dosimetry system, each case may be regarded as having the same chance as all others of being among the excess cases, and this probability of causation is equal to [RR(µ) - 1]/RR(µ). The uncertainty in this view of the probability of causation is strictly a function of the uncertainty of the average dose (and of the parameters in the dose-response function), but not of the uncertainty in individual dose, which can be much greater than the uncertainty in the average dose.

It has been stressed in two National Research Council reviews of the radioepidemiologic tables that the concept of probability of causation requires reference to a particular population. That interpretation implies that knowledge of the collective dose is indeed crucial for assigning probabilities of causation to individuals when good individual dosimetry is lacking. By "good dosimetry" we mean dosimetry that substantially reduces the uncertainty in individual dose estimates, that is, dosimetry that has good correlation with true dose. That is not to say that uncertainty in individual exposures should never be used in considering the uncertainty of an individual's probability of causation. The choice of the reference population depends on the specific application, and an agency, such as the Department of Veterans Affairs, may implicitly or explicitly choose to deal with very small reference populations (even a single person), allowing individual uncertainty in dose to drive the calculation of uncertainty in probability of causation. Nevertheless, a reasonable case can be made for considering collective dose in the assignment of individual probabilities of causation, and this underlies our recommendation that collective dose be given prominence for the Hanford workers.

For the gaseous exposures, much of the work necessary to calculate population exposure has already been performed as part of this study, in that dose estimates obtained under a few scenarios on a year-by-year basis may be multiplied by the number of persons working in a given year to form rough estimates of exposure, which can then be summed over the years of interest.
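The year-by-year multiplication just described is simple arithmetic. The sketch below uses entirely hypothetical worker counts and per-capita scenario doses, not values from the RAC report:

```python
# Hypothetical per-capita annual doses (mSv) from scenario calculations and
# hypothetical on-site worker counts; every value here is an invented placeholder.
per_capita_mSv = {1945: 0.10, 1946: 0.08, 1947: 0.05}
workers        = {1945: 15_000, 1946: 12_000, 1947: 9_000}

# Collective dose: sum over years of (per-capita dose x number of workers)
collective_mSv = sum(per_capita_mSv[y] * workers[y] for y in per_capita_mSv)
collective_person_Sv = collective_mSv / 1000.0
print(f"rough collective dose: {collective_person_Sv:.2f} person-Sv")
```

With real scenario doses and documented population counts substituted for the placeholders, this one-line sum is the rough population-exposure estimate the committee has in mind.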
For the exposure to large particles, somewhat more work remains; the Survey.xls spreadsheet gives an estimate of the distributions of contact with particles and the probability of ingestion and inhalation, but these values depend on the assumed density of particles. Assumptions concerning the number of workers working in assumed conditions are required for estimating the total number of large particles contacted, and doses computed on the basis of activity and type of contact need to be multiplied by these assumed probabilities. Nevertheless, it appears appropriate to estimate a plausible range of the total population dose due to particles. The RAC report ends with a presentation of doses conditional on contact with active particles, but this is only one part of the picture, and the probability of contact needs to be fully factored into what is presented on the topic.

3.c: The report summary seems to emphasize an extreme scenario by devoting two tables and text at the conclusion of the summary to the doses to persons who hypothetically inhaled or swallowed a highly radioactive particle. We recommend that the conclusion of the summary strike a better balance between those unlikely scenarios and the "representative worst-case" scenarios. The representative worst-case person scenarios (Table S-1) did not lead to doses that would cause concern (0.027–0.11 mSv effective dose and 0.24–1.5 mGy maximal organ dose) or that merit more-detailed dosimetric assessment. However, the estimated doses from exposure to the worst-case ruthenium particles (Tables S-2 and S-3) warrant more analysis. The studies referred to in Table 4–11 and the associated text appear to have at least barely sufficient information to estimate roughly the distribution of radioactivity and size for the ruthenium particles. That distribution, combined with more-realistic estimates of the probability distribution of inhaling or ingesting particles (which the report's authors indicate they will be developing) and with likely organ-residence times of the particles, would permit better estimates of the population distribution of lung and gastrointestinal doses from these particles. The resulting dose distributions should yield a more realistic look at the putative risk associated with the particles.

Question 4. How do we make a meaningful assessment of the risk posed by exposure to these short-lived gaseous radionuclides and radioactive particles so that we can best communicate the potential health risks to the public?

The committee's comments on the risk aspect of this question are given below. Although not part of its charter, the committee notes that the issues of public input and public education are not well defined, nor is it clear who is responsible in these specific areas. To help resolve these issues, the committee strongly recommends a workshop on how an agency or contractor can best enable the public to provide input to a dose reconstruction project and how best to communicate the potential health risks to the public.

Any assessment of risk in this context should be performed simply. The aim should be to develop approximate upper bounds for the expected numbers of radiation-induced cancers and deterministic effects (such as skin burns or fibrotic damage to the lungs or colon). The key question is whether there is merit in proceeding further with a more detailed risk assessment or whether the numbers of adverse health effects are so small as to make further analysis of little utility. The next paragraph outlines the elements of information needed for these calculations.

Details of First-level Risk Assessment: A meaningful assessment of risk requires defining the target population.
From the RAC report, it appears that the targeted on-site population consists of construction workers and military personnel on the Hanford site and perhaps residents near the site (for example, in Ringold). Because the worker and military populations were continuously changing, it would be advisable to estimate population sizes for several of the key periods (such as 1945–1947 for 131I exposure and 1952–1954 for Ru exposure). Approximate age distributions for the military and worker populations could be assumed. Distributions of doses from the various radionuclides could be estimated for each of those periods. Then, given the estimated number of subjects and their ages and doses, the number of excess cancers could be estimated by using International Commission on Radiological Protection (ICRP) or other published risk coefficients, subject to appropriate consideration of modifying factors (see the Risk Modifiers section below). The probable number of deterministic effects would depend on the estimated frequency distribution of radioactivity levels of the hot particles; the probability of skin, lung, or gastrointestinal exposure to a hot particle; and the assumed dose (or frequency distribution of doses, reflecting residence time of the particle, and so on) delivered to the target organ by a hot particle with a given amount of activity. With that information, one could address the question of the frequency of acute doses to particular organs that would be high enough to cause deterministic effects of clinical significance. The information in NCRP Report 130 (NCRP, 1999) could be used to define the threshold dose (and the product of dose and volume) for various deterministic effects.
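A first-level calculation of the kind outlined above reduces to population x mean dose x risk coefficient. The sketch below uses placeholder population sizes, doses, and a nominal risk coefficient of the general magnitude found in published radiation-protection guidance; none of these values are taken from the RAC report or any specific ICRP table:

```python
# First-level excess-cancer screen.  All inputs are illustrative assumptions.
RISK_PER_SV = 5.0e-2   # assumed nominal lifetime cancer-risk coefficient (per Sv)

periods = {
    # period label: (assumed population size, assumed mean dose in Sv)
    "1945-1947 (131I)": (15_000, 1.0e-4),
    "1952-1954 (Ru)":   (10_000, 5.0e-5),
}

total_excess = 0.0
for label, (population, mean_dose_sv) in periods.items():
    excess = population * mean_dose_sv * RISK_PER_SV
    total_excess += excess
    print(f"{label}: ~{excess:.3f} expected excess cancers")
print(f"total: ~{total_excess:.3f} expected excess cancers")
```

If even a deliberately generous upper bound of this kind comes out well below one expected case, that bears directly on the key question posed above: whether a more detailed risk assessment has any utility.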
Risk Modifiers: An inverse age-at-irradiation effect with respect to thyroid-cancer risk is well documented (Ron et al., 1995) and needs to be taken into account in considering the risk to the target population from 131I exposures. Although the Japanese atomic-bomb study suggests that there might be no thyroid-cancer risk when irradiation occurs after the age of 30 years, studies of thyroid irradiation from Chernobyl and from hyperthyroidism treatment suggest a risk, albeit one much smaller than that posed by childhood irradiation. Similarly, with respect to hot particles on the skin, studies of radiogenic skin-cancer risk have indicated an inverse age-at-irradiation effect, although the data are not as extensive as those on thyroid cancer (Shore, 2001). For the gaseous exposures to low-LET radiation that occurred over a protracted period and delivered low doses, a dose and dose-rate effectiveness factor (DDREF) should be incorporated into the calculations of risk (NCRP Report 64, 1980; ICRP Report 60, 1991). However, the doses from hot particles are delivered to a small volume of tissue at relatively high dose rates, so a DDREF would not be warranted in that case.

If you desire elaboration of the comments above or in the accompanying appendixes, please do not hesitate to call or write either Dr. Isaf Al-Nabulsi or me.

Sincerely yours,

William J. Schull, Chairman
COMMITTEE TO REVIEW METHODS FOR ESTIMATING RADIATION DOSES TO WORKERS AT HANFORD

WILLIAM J. SCHULL (Chair), Professor Emeritus, Human Genetics Center, Houston, TX
BRUCE B. BOECKER, Scientist Emeritus, Lovelace Respiratory Research Institute, Albuquerque, NM
ANDRÉ BOUVILLE, National Cancer Institute, Bethesda, MD
A. BERTRAND BRILL, Vanderbilt University Medical School, Nashville, TN
MELVIN W. CARTER, Neely Professor Emeritus, Georgia Institute of Technology, Dunwoody, GA
EDMUND A. C. CROUCH, Cambridge Environmental, Inc., Cambridge, MA
SHARON M. FRIEDMAN, Lehigh University, Bethlehem, PA
SUSAN E. LEDERER, Yale University School of Medicine, New Haven, CT
MILTON LEVENSON, Menlo Park, CA
DONALD E. MYERS, University of Arizona, Tucson, AZ
ROY E. SHORE, New York University School of Medicine, New York, NY
DANIEL O. STRAM, University of Southern California, Los Angeles, CA

NATIONAL RESEARCH COUNCIL STAFF
EVAN B. DOUPLE, Director, Board on Radiation Effects Research
ISAF AL-NABULSI, Study Director
DIANNE STARE, Project Assistant
DORIS E. TAYLOR, Staff Assistant

SPONSOR'S PROJECT OFFICER
JAMES SMITH, Centers for Disease Control and Prevention

EDITOR
NORMAN GROSSBLATT