4
Review of Recent Data on Radiation Epidemiology, Biology, and Dosimetry

The statement of task from the Health Resources and Services Administration (HRSA) to the committee requests that we assess the most recent scientific information related to radiation exposure and associated cancers to determine whether there is new information that could affect the magnitude of radiation cancer-risk estimates. If there is, it would provide part of the information base needed for considering the inclusion of new populations and new geographic areas among those covered by the Radiation Exposure Compensation Act (RECA).

The risk estimates for human cancers after exposure to low-LET ionizing radiation are based on human tumor frequencies, which come mainly from cancer-mortality data on the survivors of the atomic-bomb detonations at Hiroshima and Nagasaki (NRC, 1990; ICRP, 1991; NCRP, 2001; reviewed in Wakeford, 2004). Risk estimates for high-LET radiation are based on mortality data on uranium and other underground miners exposed to radon (NRC, 1999) and on the radium-dial painters (NRC, 1988; reviewed in Wakeford, 2004). The responses at very low exposures to low-LET radiation are estimated by extrapolation of data on atomic-bomb survivors over the available low- to moderate-dose range (0.005–2 Sv). The extrapolation model used is the linear nonthreshold (LNT) model (NCRP, 2001) discussed in Chapter 3. Support for the use of the LNT model for estimates of cancer risks posed by low-LET radiation comes from human epidemiologic studies (medical and occupational), experimental-animal tumor studies, and cellular-radiation studies (NCRP, 2001). Data from similar but fewer studies involving high-LET exposures also support the use of the LNT model (NCRP, 2001). The same types of studies are used to provide estimates of the effects of dose fractionation and dose protraction (NCRP, 2001). Epidemiologic studies are also used for estimating risks to specific exposed populations, such as underground miners exposed to radon (NRC, 1999) and populations exposed to iodine-131 (131I) (UNSCEAR, 2000).

The International Commission on Radiological Protection (ICRP) and the National Council on Radiation Protection and Measurements (NCRP) are moving to use tumor incidence, rather than mortality, in their revised cancer-risk estimates. Using tumor-incidence data to develop risk estimates provides an additional useful measure of risk because morbidity entails health, emotional, and financial costs to the individual and to society.

In this chapter, we consider and present evidence from new or updated epidemiologic studies, radiation-biology advances, and dosimetry approaches that could result in significant changes in the risk estimates for human cancer induced by ionizing-radiation exposure. This chapter brings together information that could influence compensation for diseases currently covered by RECA legislation. In Chapter 7, we discuss additional diseases brought to our attention by members of the public at a series of hearings held in response to community invitations, with a view to whether eligibility for coverage should be extended to them. The following sections discuss what is new in those fields of study.

RECENT DEVELOPMENTS IN RADIATION EPIDEMIOLOGY

Epidemiologic studies of the Japanese survivors of the atomic bombs and of other populations exposed to radiation medically, occupationally, or accidentally have characterized the long-term health effects of radiation (see Chapter 3). Risk estimates for radiogenic cancers and nonmalignant diseases now compensable under RECA come primarily from epidemiologic studies of uranium and other underground miners exposed to radon and from studies of the atomic-bomb survivors.
The mining populations were exposed primarily to radon internally, while the atomic-bomb survivors were exposed primarily to external gamma rays. Risk estimates for thyroid cancer also come from populations exposed externally to x and gamma rays and internally to radioiodine. Studies of worker populations exposed to low or very low doses of low-LET radiation over long periods provide radiogenic-cancer risk estimates with which the more precise estimates obtained from the atomic-bomb survivors can be compared to evaluate their applicability to populations chronically exposed to low radiation levels. Extensive and detailed reviews of those studies have been reported previously (NRC, 1990, 1998, 1999; ICRP, 1991; UNSCEAR, 1993, 2000; IARC, 2000, 2001). A comprehensive reassessment of risk estimates is included in a companion, forthcoming report from the National Research Council Committee on the Biological Effects of Ionizing Radiation (BEIR), specifically the Committee on Health Risks from Exposure to Low Levels of Ionizing Radiation (BEIR VII).

Risks to the Health of Miners, Millers, and Ore Transporters

Studies of Uranium Miners

Epidemiologic studies of underground miners have identified an increased risk of primary lung cancer associated with exposure to alpha-particle radiation from decay products of inhaled radon (NRC, 1988). Those studies generally use relative-risk (RR) models; estimates are discussed below. Although absolute risk is important from a public-health perspective, we discuss RR and excess relative risk (ERR; ERR = RR − 1) because of their use in the cited literature. The most recent and widely recognized estimates of lung-cancer risk associated with radon exposure were reported in the BEIR VI report (NRC, 1999). An important finding of the BEIR VI committee relevant to some of the RECA populations (uranium miners, uranium millers, and ore transporters) is that the ERR of radiogenic lung cancer decreases with increasing attained age and time since exposure. The eligible people now seeking compensation are generally more than 60 years old and have been out of the mines for 30 years or more. Accordingly, they are at much lower RR for radiogenic lung cancer now than they were in earlier years after retiring from uranium mining. Using data on 11 international cohorts, the BEIR VI committee estimated that uranium miners 65-74 years old have about 25% of the ERR of radon-induced lung cancer that miners in their 50s have. The most recent analysis of the Colorado Plateau uranium-miner data (Hornung et al., 1998) estimated that the ERR for lung cancer in miners in their 70s was less than 10% of that in miners in their 50s. Similarly, the BEIR VI committee estimated that miners of the same age who have been out of the mines for more than 25 years have less than half the lung-cancer ERR of recently retired miners.
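As an illustration of how such modifying factors act on the ERR, the sketch below applies hypothetical attained-age and time-since-exposure multipliers to an assumed baseline ERR. The numeric values are illustrative assumptions chosen to mirror the proportions quoted in the text; they are not the fitted BEIR VI model parameters.

```python
# Illustrative sketch: how attained-age and time-since-exposure factors
# scale the excess relative risk (ERR) of radon-induced lung cancer.
# All numbers below are assumptions for illustration, not fitted BEIR VI
# coefficients.

def relative_risk(err_baseline, age_factor, time_factor):
    """RR = 1 + ERR, where the baseline ERR is scaled by the modifiers."""
    err = err_baseline * age_factor * time_factor
    return 1.0 + err

# Hypothetical baseline ERR for a miner in his 50s, recently out of the mines.
err_50s_recent = 2.0

# Assumed modifiers in the spirit of the text: miners 65-74 years old retain
# about 25% of the ERR of miners in their 50s, and miners out of the mines
# for more than 25 years retain less than half the ERR of recent retirees.
age_65_74 = 0.25
out_over_25y = 0.5

rr_50s_recent = relative_risk(err_50s_recent, 1.0, 1.0)
rr_older_long_retired = relative_risk(err_50s_recent, age_65_74, out_over_25y)

print(rr_50s_recent)          # 3.0
print(rr_older_long_retired)  # 1.25: the ERR has fallen from 2.0 to 0.25
```

The multiplicative structure is the point of the sketch: each modifying factor scales the excess (ERR), not the total RR, which is why an elderly long-retired miner's RR moves most of the way back toward 1.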
The analysis of the Colorado Plateau miner lung-cancer data indicated a 65% reduction in ERR for miners who have been out of the mines for more than 25 years. Those analyses have also shown a synergistic relationship between exposure to radon and cigarette smoking: the ERR of radiogenic lung cancer in miners who smoke is greater than the sum of the ERRs of lung cancer associated with smoking alone and with radon alone. The most recent analysis of the Colorado Plateau uranium-miner cohort (Hornung et al., 1998) and the pooled analysis of 11 miner cohorts in BEIR VI (NRC, 1999) each found that the joint effect was greater than additive but less than multiplicative. In both analyses, never-smokers had about 3 times the ERR per working-level month (WLM) of ever-smokers. These findings were supported by a study of nonsmoking uranium miners in the Colorado Plateau (Roscoe, 1997), who had a standardized mortality ratio (SMR) of 12.7 for lung cancer compared with the overall SMR of 5.8 in the entire cohort.

In the most recent update of the Colorado Plateau uranium miners' all-cause mortality study (Roscoe, 1997), the cohort of 3,238 white male miners was followed to determine certified causes of death in 1960-1990. Their mortality experience was compared with the combined mortality in neighboring states. Most of the findings of the study were consistent with those of previous studies of this miner population. The SMRs for lung cancer and pneumoconiosis continued to show statistically significant increases (371 deaths, SMR = 5.8, 95% CI [confidence interval] = 5.2-6.4; and 41 deaths, SMR = 24.1, 95% CI = 16.0-33.7, respectively). The SMRs for lung cancer and pneumoconiosis increased with increasing level of radon-decay products and with duration of employment in the mines. Roscoe (1997) concluded that lung cancer and pneumoconiosis remain the most important long-term causes of death in this cohort.

The most definitive study of cancer other than lung cancer among miners exposed to radon was the meta-analysis of data on the 11 international miner cohorts reported by Darby et al. (1995). The men in those cohorts (N = 64,209) had been employed in underground mines for an average of 6.4 years; they had an estimated average cumulative exposure to radon of 155 WLM and an average followup of almost 17 years. The RR of death from all cancers combined other than lung cancer (N = 1,179) was similar to the expected value (RR = 1.01, 95% CI = 0.95-1.07) based on the mortality of the general populations in areas around the mines. Those results should be interpreted cautiously because they are likely to underestimate the true RR in the uranium-miner population owing to the healthy-worker effect. The authors concluded that the study provided strong evidence that high concentrations of radon in air do not cause a substantial risk of mortality from cancer other than lung cancer.
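The SMRs and confidence intervals quoted above can be reproduced from observed and expected death counts. The sketch below uses Byar's approximation for the Poisson confidence limits on an observed count; the expected count is back-calculated from the reported SMR, which is an assumption, since the underlying expected counts are not quoted here.

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio with ~95% CI via Byar's approximation.

    Byar's method approximates exact Poisson limits on the observed count
    and divides them by the expected count.
    """
    smr = observed / expected
    lower = observed * (1 - 1/(9*observed)
                        - z/(3*math.sqrt(observed)))**3 / expected
    upper = (observed + 1) * (1 - 1/(9*(observed + 1))
                              + z/(3*math.sqrt(observed + 1)))**3 / expected
    return smr, lower, upper

# Lung cancer in the Colorado Plateau cohort: 371 observed deaths, SMR = 5.8.
# The expected count is back-calculated from the reported SMR (assumption).
observed = 371
expected = observed / 5.8

smr, lo, hi = smr_with_ci(observed, expected)
print(round(smr, 1), round(lo, 1), round(hi, 1))  # 5.8 5.2 6.4
```

With these inputs the approximation reproduces the 95% CI of 5.2-6.4 reported in the text, which is a useful consistency check on counts of this size.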
Studies of Uranium Millers and Ore Transporters

Risks to the health of uranium millers and ore transporters from occupational exposure have not been as well characterized as the risks to miners' health because of smaller sample sizes and little or no data on individual exposures. Millers' exposures were primarily from inhalation of dusts containing uranium, silica, and vanadium. Their internal exposure posed potential health hazards from radiation (alpha particles) and from the chemical toxicity of uranium compounds arising during the conversion of uranium ore to yellowcake (see Chapter 3). A study of mortality among 662 millers from the Colorado Plateau who were followed from 1950 through 1967 (Archer et al., 1973) found four deaths from lymphatic and hematopoietic cancers combined (excluding leukemia), a small and statistically nonsignificant increase over the rate in the US general population. A later, larger, and more powerful study evaluated mortality in an expanded cohort of millers in the same area (N = 2,002) who were followed through 1971 (Waxweiler et al., 1983). They found no statistically significantly increased RRs of mortality from any malignant (radiogenic or other) neoplasm, including renal cancer. The only statistically significant increase in disease risk in that cohort was for nonmalignant respiratory disease (55 deaths, SMR = 1.63, 95% CI = 1.23-2.12); however, there was no evidence that the risk increased with increasing length of employment. A statistically nonsignificant excess of deaths from chronic (nonmalignant) renal disease (6 deaths, SMR = 1.67, 95% CI = 0.60-3.5) was also found, but it did not appear to be related to work in the mills.

Pinkerton et al. (2004) updated the Waxweiler et al. study by extending the vital-status followup by 27 years, through December 31, 1998. The authors completely reviewed and updated all work histories and corrected errors found in previous files. They also limited the study cohort to men who met the original cohort definition, never worked in uranium mines, and worked in one or more of the seven mills whose personnel records were originally microfilmed. That redefinition reduced the cohort from 2,002 in the Waxweiler et al. study to 1,485. Because exposure estimates were not available for individual workers, Pinkerton et al. used life-table analyses to compare mortality in the workers with that in the general US population. Mortality from all causes combined (810 deaths, SMR = 0.92, 95% CI = 0.86-0.99), including all cancers (184 cancer deaths observed, SMR = 0.90, 95% CI = 0.78-1.04), was less than expected on the basis of US rates. A statistically significant increase in mortality from nonmalignant respiratory disease was found (100 deaths, SMR = 1.43, 95% CI = 1.16-1.73). No statistically significant increase was found in mortality from lung cancer (78 deaths, SMR = 1.13, 95% CI = 0.89-1.41) or chronic renal disease (6 deaths, SMR = 1.35, 95% CI = 0.58-2.67).
No positive trend with duration of employment was found in excess mortality from these or any other types of cancer.

There have been few studies of morbidity among uranium millers. Thun and colleagues examined renal toxicity in a group of 39 uranium millers compared with 36 cement-plant workers (Thun et al., 1985). They found a weak dose-response relationship for excretion of beta-2-microglobulin among millers working in the yellowcake drying and packaging area, the area with the highest exposure to soluble uranium. They concluded that the results suggested impaired reabsorption of low-molecular-weight proteins, consistent with uranium nephrotoxicity.

More recently, there have been two studies of uranium workers who were engaged in production activities using the uranium coming from the mills. A study of uranium-enrichment workers in the UK (McGeoghegan and Binks, 2000a) found no overall excess mortality or morbidity due to any cancer compared with nonradiation workers. They did find, however, a significant dose-response relationship for bladder cancer when external radiation dose was lagged by 20 years. A similar study by the same investigators of workers involved in the production of nuclear fuels and uranium hexafluoride (McGeoghegan and Binks, 2000b) found no significant association of radiation exposure with any cancer, with the exception of Hodgkin's disease (both mortality and morbidity). They also reported a significant association with morbidity due to non-Hodgkin's lymphoma. They noted that these associations were not likely to be causal.

The committee is unaware of any epidemiologic studies of ore transporters. Like the millers' exposure, their primary potentially hazardous exposure was to ore dusts, probably with a greater risk of chemical toxicity than of radiation toxicity. The nature of their work makes it unlikely that their body burdens of soluble uranium compounds exceeded renal thresholds for chemical toxicity or that their exposure to radiation from the ores substantially exceeded normal background levels.

Risks to Downwinders and Onsite Participants at US Nuclear Tests

Several populations have been at risk of exposure to ionizing radiation of types similar to the exposures of downwinders and onsite test participants. Followup studies of those other populations provide information about the long-term health effects of such exposure; some also provide data from which estimates of the risks of radiation-related, or radiogenic, diseases (primarily malignant diseases) are calculated. We discuss here new information from specific population studies that adds to knowledge and understanding of the types and magnitude of the health risks for which downwinders and onsite test participants currently are compensated.

Radiogenic Cancers and Other Diseases

Information on radiation risks is summarized in many of the cited sources (UNSCEAR, NRC, and NCRP) and in chapters of several textbooks dealing with the subject (Mettler and Upton, 1995; Hall, 2000). Updated information is scheduled to appear shortly in a report from the BEIR VII committee.
The risk estimates in BEIR VII take into account DS02 data on the atomic-bomb survivors that were not available to this committee, and those data should be used whenever there are significant discrepancies between the findings we survey from literature published in the last 20 years and the BEIR VII update based on reanalysis of current data. Long-term studies of irradiated populations continue to provide new information on effects of internal and external sources of exposure. Effects of high-dose-rate exposure are chronicled in reports of findings in the Japanese atomic-bomb survivors, supplemented by data from several large studies of radiation workers exposed to low-dose-rate radiation. The lower-dose-rate exposures received by worker populations, along with data from medical-therapy populations, add to current knowledge of the risks in humans of the different radiogenic diseases with respect to the rate and amount of radiation dose to body organs and the total body. Dose from internal emitters is protracted because it is delivered over the decay time of the particular radionuclide. Effects of internal emitters of low linear energy transfer (low LET) are less than those of comparable doses delivered in a single high-dose-rate exposure because there is continuing repair of sublethal damage when a dose is delivered at a low dose rate.

The need to expand knowledge of the radiation effects of 131I has led to many studies, some of which continue. The dose to the thyroid from 131I per unit intake is about 1,000 times the dose received by other normal organs. The dose to other body organs from other fallout radionuclides is much lower because of low uptake and retention in those organs (CDC-NCI, 2001). Increased incidence of thyroid cancer has been observed in children who received high 131I doses, but no increase in leukemia from the lower bone-marrow doses received after 131I intake from fallout has been statistically confirmed. Continuing studies of health effects in persons resident in southern Utah during the years of high fallout from the Nevada Test Site (NTS) reveal marginally significant increases in thyroid neoplasms and leukemia in children.

The studies of disease in the Japanese atomic-bomb survivors provide the most reliable information for risk assessment for several reasons. The survivors received a wide range of doses, and, unlike medical subjects, the population is composed of people with a typical range of health conditions before exposure. Large numbers of subjects in well-defined cohorts have been studied over many years. Very good followup involving a range of ages and both sexes has resulted in many person-years of observation, which is needed for valid statistical analyses. Good estimates of dose have been calculated for each member of the cohort as a result of in-depth dosimetry studies (Dosimetry System 02, DS02).
The new dosimetry system incorporates refinements that take into account shielding histories and new information on neutrons (Preston et al., 2004; DS02 to be published in 2005). Periodic publications update findings from the joint US-Japanese Radiation Effects Research Foundation studies of the several a priori defined cohorts and subcohorts of the survivors of the atomic bombs dropped in 1945. The best-established information on cancer mortality and cancer incidence comes from the large Life Span Study (LSS) cohort, buttressed by results of special studies of cancer in children born of irradiated parents (Izumi et al., 2003) and of leukemia mortality in children who were in utero at the time of the bombs (Delongchamp et al., 1997). In the absence of statistically meaningful data from fallout-exposed populations themselves, risk estimates from the atomic-bomb survivors are the best available basis for assessing the magnitude and kinds of effects expected in downwinders and onsite test participants. Thyroid cancer is discussed in a separate section, which compares the results of studies of different irradiated populations.
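The linear and nonlinear dose-response shapes that recur in these studies can be illustrated numerically. In the sketch below, the linear-quadratic coefficients are chosen only to reproduce a 1/20 ratio between excess risk at 0.1 Sv and at 1.0 Sv, the proportion reported for leukemia in the mortality findings discussed in the next section; they are not fitted values from the atomic-bomb data.

```python
# Illustrative sketch of linear vs. linear-quadratic dose-response shapes.
# Under a linear (LNT-style) model, excess risk scales proportionally with
# dose, so the risk at 0.1 Sv is exactly 1/10 of that at 1.0 Sv.  A
# linear-quadratic model can instead reproduce a roughly 1/20 ratio, as
# reported for leukemia.  alpha and beta are illustrative assumptions.

def linear_err(dose, alpha=1.0):
    """Excess relative risk under a purely linear model."""
    return alpha * dose

def linear_quadratic_err(dose, alpha=1.0, beta=1.25):
    """Excess relative risk under a linear-quadratic model."""
    return alpha * dose + beta * dose**2

linear_ratio = linear_err(0.1) / linear_err(1.0)                  # 1/10
lq_ratio = linear_quadratic_err(0.1) / linear_quadratic_err(1.0)  # about 1/20

print(linear_ratio, lq_ratio)
```

With beta = 1.25 * alpha, the quadratic term is negligible at 0.1 Sv but dominates at 1.0 Sv, which is exactly the behavior that makes low-dose leukemia risk fall below a straight-line extrapolation.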

Atomic-Bomb Survivor Studies

Important cancer-mortality findings reported since 1990 and the results of new incidence studies are summarized below.

Cancer Mortality

Cancer mortality through 1990 was analyzed on the basis of the DS86 dosimetry system. The major findings include the following. Most of the excess deaths from leukemia occurred in the first 15 years after exposure. For solid cancers, the excess risk was consistent with a lifelong increase in age-specific cancer risk. The excess relative lifetime risk per sievert for solid cancers in persons exposed at age 30 was about 3 times that in persons exposed at age 50, and the projected lifetime risks for those exposed at age 10 were 1.0-1.8 times the estimates for those exposed at age 30. Excess risks of solid cancers were linear up to about 3 Sv, but they were nonlinear for leukemia, with an estimated risk at 0.1 Sv of about 1/20 the risk at 1.0 Sv (Pierce et al., 1996).

More recently, the findings were extended through 1997 (Preston et al., 2003). The study included 9,335 deaths from solid cancer and 31,881 deaths from noncancer diseases on the basis of a 47-year followup. About 440 (5%) of the solid-cancer deaths were attributed to the radiation exposure. The excess risks of solid cancer were linearly related to dose down to the lowest dose range studied (0-150 mSv). One result was that ERRs declined with increasing attained age (age at death); another was that the ERR was highest for those exposed as children. There was no direct evidence of radiation effects at doses less than about 0.5 Sv (Preston et al., 2003).

Cancer Incidence

Cancer incidence in the atomic-bomb survivors is based on data in the Hiroshima and Nagasaki tumor registries.
Among 79,972 individuals in the extended Life Span Study cohort (LSS-E85), 8,613 had a first primary solid cancer diagnosed in 1958-1987 (Thompson et al., 1994). Cancer cases among members of the LSS-E85 cohort were identified in the Hiroshima and Nagasaki tumor registries, and special efforts were made to ensure complete case ascertainment, data quality, and data consistency in the two cities. Dosimetry System 1986 (DS86) organ doses were used for computing risk estimates. Ron et al. (1994) analyzed 9,014 first primary incident cancers diagnosed in 1958-1987 in LSS cohort members and compared incidence with mortality rates based on analysis of 7,308 death certificates that listed cancer as the underlying cause of death. When deaths were limited to those occurring in the same interval in persons living in Hiroshima or Nagasaki, there were 3,155 more incident cancer cases than cancer deaths overall and 1,262 more incident cancers of the digestive system than deaths from cancers of that system. For many cancers, the incidence series was at least twice as large as the comparable mortality series, and both had significant dose-response relationships. For all solid tumors, the estimated ERR at 1 Sv (ERR1Sv) for incidence (ERR1Sv = 0.63) is 40% larger than the ERR based on mortality data from 1950-1987 in all of Japan (ERR1Sv = 0.45). The corresponding excess absolute risk (EAR) point estimate is 2.7 times as great for incidence as for mortality. For some cancer sites, the difference in the magnitude of risk between incidence and mortality is greater. The differences reflect the greater diagnostic accuracy of the incidence data and the lack of full representation of radiosensitive but relatively nonfatal cancers, such as breast, skin, and thyroid cancers, in the mortality data. Incidence and mortality data provide complementary information for risk assessment (Ron et al., 1994). The observations made in the tumor-registry studies are summarized in Table 4.1.

A survey of breast-cancer incidence in the LSS population found 1,093 breast cancers diagnosed during 1950-1990. A linear and statistically highly significant radiation dose-response relationship was found. Exposure before the age of 20 years was associated with a higher ERR1Sv than exposure at greater ages, with no evidence of consistent variation with age at exposure for ages under 20 years. ERR1Sv was observed to decline with increasing attained age, with the largest drop around the age of 35 years (Land et al., 2003). The EAR was not reported, but it probably changed in the opposite direction, to a lesser extent.
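The relation between the incidence-based and mortality-based estimates quoted above is simple arithmetic, sketched below as a check; the two ERR1Sv values are taken directly from the text, and nothing else is assumed.

```python
# Check of the incidence-vs-mortality comparison for all solid tumors:
# the ERR at 1 Sv is 0.63 from incidence data and 0.45 from mortality data,
# so the incidence-based estimate is 40% larger.

err_incidence = 0.63   # ERR1Sv, incidence (Thompson et al., 1994)
err_mortality = 0.45   # ERR1Sv, mortality, 1950-1987, all of Japan

percent_larger = (err_incidence / err_mortality - 1) * 100
print(round(percent_larger))  # 40
```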
TABLE 4.1 Tumor Incidence Rates Observed in the Japanese Atomic-Bomb Survivors

                    Thompson et al., 1994            Ron et al., 1994
Site                ERR1Sv   EAR (per 10^4 PY Sv)
All solid cancer    0.63     29.7                    Significant increased risk (ERR1Sv = 0.63)
Stomach             0.32                             Significant increased risk
Colon               0.72                             Significant increased risk
Lung                0.95                             Significant increased risk
Breast              1.59                             Significant increased risk
Ovary               0.99                             Significant increased risk
Urinary bladder     1.02                             Significant increased risk
Thyroid             1.15                             Significant increased risk
Liver               0.49                             Significant increased risk
Nonmelanoma skin    1.0                              Not stated
Salivary gland                                       Significant increased risk

The incidence of leukemia, lymphoma, and myeloma in the LSS cohort from late 1950 through the end of 1987 was analyzed on the basis of followup of 93,696 survivors accounting for 2,778,000 person-years (PY) (Preston et al., 1994). The analyses added 9 years of followup for leukemia and 12 years for myeloma to previous reports and included the first analysis of lymphoma incidence in this cohort. The leukemia registry and the Hiroshima and Nagasaki tumor registries included a total of 290 leukemias, 229 lymphomas, and 73 myelomas. The primary analyses were restricted to first primary tumors diagnosed among residents of the cities or surrounding areas with DS86 dose estimates of 0-4 Gy (231 leukemias, 208 lymphomas, and 62 myelomas) and used time-dependent models for the EAR. Separate analyses were reported for acute lymphocytic leukemia (ALL), acute myelogenous leukemia (AML), chronic myelocytic leukemia (CML), and adult T-cell leukemia/lymphoma (ATL). There were few cases of chronic lymphocytic leukemia (CLL) in the Japanese population independent of radiation exposure, so CLL was excluded from later leukemia risk analyses. There was strong evidence of radiation-induced risks for all subtypes except ATL, and there were substantial subtype differences with respect to the effects of sex and age at exposure and in the temporal pattern of risk. The AML dose-response function was nonlinear, whereas there was no evidence against linearity for the other subtypes. When averaged over the followup period, the EAR estimates (in cases per 10^4 PY Sv) were 0.6, 1.1, and 0.9 for ALL, AML, and CML, respectively. The corresponding estimated average ERRs at 1 Sv were 9.1, 3.3, and 6.2, respectively. There was some evidence of an increased risk of lymphoma in males (EAR = 0.6 case per 10^4 PY Sv) but no evidence of an excess in females. There was no evidence of an excess risk of multiple myeloma in these analyses.
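An EAR expressed in cases per 10^4 PY Sv converts directly into an expected number of excess cases for a cohort. The sketch below applies the AML estimate quoted above to the reported cohort followup; the 0.2 Sv mean dose is a hypothetical round number for illustration, not a value from the study.

```python
# Converting an excess absolute risk (EAR) into expected excess cases.
# The EAR here is in cases per 10,000 person-years per sievert (10^4 PY Sv).

def excess_cases(ear_per_1e4_py_sv, person_years, mean_dose_sv):
    """Expected excess cases = EAR x (PY / 10^4) x mean dose in Sv."""
    return ear_per_1e4_py_sv * (person_years / 1e4) * mean_dose_sv

# AML EAR of 1.1 cases per 10^4 PY Sv (Preston et al., 1994), applied to the
# 2,778,000 PY of followup reported for the cohort.  The 0.2 Sv mean dose is
# an illustrative assumption, not a figure from the study.
aml_excess = excess_cases(1.1, 2_778_000, 0.2)
print(round(aml_excess, 1))  # 61.1
```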
Mortality from Leukemia and Solid Cancers in Children Exposed in Utero

Cancer mortality through 1992 was assessed in 807 atomic-bomb survivors exposed in utero and in 5,545 survivors who were less than 6 years old at the time of exposure (Delongchamp et al., 1997). Doses in both groups were at least 0.01 Sv. Mortality was compared with that in a low-dose comparison group (10,453 persons with little or no exposure). Ten cancer deaths were observed among the in utero-exposed persons, with a statistically significant dose-response relationship and an ERR per sievert of 2.1 (90% CI = 0.2-6.0). That estimate did not differ substantially from the estimate for survivors exposed during the first 5 years of life. The cancer deaths among those exposed in utero included leukemia (two) and cancers of female-specific organs (three) and digestive organs (five). Nine of the deaths occurred in females (ERR/Sv = 6.7, 90% CI = 1.6-17), and much of the effect was due to female-specific cancers (ERR/Sv = 9.7, 90% CI = 0.7-42). Those risks did not differ significantly from those seen in females exposed as children. No deaths from solid cancer occurred in males exposed in utero; mortality in males and females differed even when female-specific cancers were excluded from the comparison. There were only two leukemia deaths among those exposed in utero, but the leukemia death rate in this group was still marginally higher than that in the comparison group (p = 0.054). The authors expressed caution in the interpretation of the data because the number of cancer deaths was small and because of the unexplained difference in mortality from solid cancer between the sexes (Delongchamp et al., 1997). Their tentative conclusions were that the study provided support for a somewhat higher risk during the first trimester of pregnancy, that the increased risk persisted through childhood into the adult years, and that the pattern of diseases was similar after in utero and childhood exposure. Because of the wide uncertainty range, they concluded that their data did not exclude the possibility that the cancer risk from in utero exposure could be several times higher than the risk from childhood exposure.

A comprehensive review of the uncertainties in the different published studies is provided by Boice and Miller (1999). They discuss the confounding of reasons for referral with risk in pelvimetry studies and conclude that "although it is likely that in utero radiation presents a leukemia risk to the fetus, the magnitude of the risk remains uncertain." They found the causal nature of the risk of cancers other than leukemia less convincing, and the similar relative risk (RR = 1.5) for virtually all forms of childhood cancer suggested an underlying bias. Chapter 8 of Mettler and Upton (1995) also provides a broad review of current knowledge of the effects of radiation exposure in utero.

Conclusion

Continuing investigations of the Japanese atomic-bomb survivors confirm and extend the evidence defining cancer mortality and risk after total-body high-dose-rate exposure. The radiation risk is better defined than previously, on the basis of analysis of incidence data classified by type of cancer and by age and sex at the time of exposure.
The high risk of thyroid cancer in children is consistent with the results of other studies (see the thyroid-cancer section). Data on cancer incidence and mortality from ongoing studies of the youngest survivors, all of whom are now over 60 years old, will be important as they emerge. Although the risk of particular cancers posed by radiation is better described by incidence than by mortality, the number of documented cases in each disease category is still small, so the uncertainty range is wide. Continuing followup will be needed to increase confidence in disease-specific risk coefficients. The newest risk estimates are based on longer followup and better dosimetry.

Thyroid Cancer

Thyroid cancer is a relatively rare disease, with about 1,000 deaths certified and about 13 times as many new thyroid cancers reported each year in the United States (http://seer.cancer.gov/csr/1973_1998/thyroid.pdf, accessed February 17, 2005). A definite trend of increasing thyroid-cancer incidence during most of the last 60 years has been attributed in part to radiation therapy of the head and neck

International Commission on Radiological Protection (ICRP, 1998) for assessing the effect of autosomal dominant or recessive mutations on radiation-induced tumor frequencies. However, persons in such susceptible groups face the potential of an increased risk at the individual level. ICRP (1998), in its report Genetic Susceptibility to Cancer, concluded:

The principal conclusion by the Commission is that, on current knowledge, the presence of familial cancer disorders does not impose unacceptable distortions in the distribution of radiation cancer risk in typical human populations. For individuals with familial cancer disorders, radiation cancer risks relative to baseline are judged by the Commission to be small at low doses and insufficient to form the basis of special precautions. It seems likely, however, that risks to those with familial cancer disorders will become important at the high doses received during radiotherapy.

NCRP in its Report 136 (NCRP, 2001, page 194) endorsed the ICRP statement on the effect of susceptibility mutations at the population level:

The studies to date of the rare genetic mutations do not suggest they will have a major impact on total irradiated-population risk or on the shape of the dose-response.

Again, it is the effect at the individual level that would probably be influenced by mutations for susceptibility to radiation-induced cancer at the high doses received during therapy. No information is available on any specific sensitivity to cancer induction by low doses received occupationally, medically, or environmentally in people who carry susceptibility mutations. However, for people with such mutations, exposure to a given dose of radiation might be more likely to induce a cancer, but the individual’s baseline risk is also elevated.
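The point about elevated baselines can be made concrete with simple arithmetic: for the same relative risk, a carrier with a higher baseline risk incurs a larger absolute excess risk. The sketch below is our own illustration with hypothetical risk values, not a calculation from ICRP or NCRP.

```python
# Illustrative arithmetic (hypothetical values, not from ICRP/NCRP): how an
# elevated baseline risk changes the absolute excess risk for the same
# radiation-associated relative risk.

def excess_absolute_risk(baseline: float, relative_risk: float) -> float:
    """Absolute excess lifetime risk = baseline * (RR - 1)."""
    return baseline * (relative_risk - 1.0)

# Same hypothetical relative risk (RR = 1.1) applied to a typical baseline
# (2%) and to an elevated baseline (20%) in a susceptibility-mutation carrier.
typical = excess_absolute_risk(0.02, 1.1)
carrier = excess_absolute_risk(0.20, 1.1)
print(round(typical, 4), round(carrier, 4))  # 0.002 0.02
```

The carrier's absolute excess is ten times larger even though the relative risk is identical, which is why the same exposure "might be more likely to induce a cancer" in a carrier.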
In recent years, the approaches for identifying single-nucleotide polymorphisms in the human population have improved substantially (Carlson et al., 2004; Belmont and Gibbs, 2004). In addition, recent studies have provided evidence of links between specific polymorphisms and increased disease risk (Houlston and Peto, 2004). Such studies do not include radiation-induced cancers. However, given the prevalence of polymorphisms in the population and their relative allele frequency (over 1% by definition), scientists and risk assessors need to follow research in this field to determine whether specific genetic polymorphisms can enhance individual risks of radiation-induced tumors.

Minisatellite Alterations and Hereditary Risk

Minisatellites are variable regions of DNA characterized by a series of repeat nucleotide sequences that usually occur in noncoding regions of DNA. Mutations in minisatellite regions involve changes in the number of repeat sequences, and they are about 1,000 times more common than base-change mutations in protein-coding genes. Because of their high mutability by ionizing radiation (for example, about 4% in the exposed people discussed in Dubrova et al., 2002a), minisatellite mutations have been proposed for use in measuring hereditary effects of radiation exposure. Dubrova and colleagues have conducted several studies of populations exposed to fallout from the Chornobyl accident (Dubrova et al., 1996, 1997, 2002b) and of families living in the vicinity of the Semipalatinsk nuclear test site (Dubrova et al., 2002a). They have demonstrated a 1.6- to 2.0-fold increase in minisatellite mutations in the offspring of irradiated parents. However, not only does the increase appear to be independent of the dose received, but no mechanism has been identified by which radiation could induce such changes in the number of repeats in a particular minisatellite region. Using a similar technique, Weinberg et al. (2001) reported a 7-fold increase in repeat-sequence mutations in people born to fathers who were involved in cleanup at the Chornobyl plant. However, Jeffreys and Dubrova (2001) responded by describing the method used by Weinberg et al. (2001) as unreliable and concluded that the mutants detected had to be validated. That has not been done, so the study by Weinberg et al. (2001) remains controversial. Other studies of radiation-exposed populations have failed to demonstrate an increase in minisatellite mutations in the offspring of exposed fathers. They include two studies of Chornobyl cleanup workers (Livshits et al., 2001; Kiuru et al., 2001) and a study of the offspring of the Japanese atomic-bomb survivors (Kodaira et al., 1995).
In addition, no evidence of increased minisatellite mutations was observed in the sperm of radiotherapy patients sampled at various times after treatment (May et al., 2000). The UK National Radiological Protection Board (NRPB) has recently commented on the studies conducted at Semipalatinsk by Dubrova et al. (2002a), noting that although all other studies have had negative results or been methodologically flawed, Dubrova et al. (2002a) provides the most convincing demonstration to date of a radiation-induced effect on minisatellite mutation frequency (Bouffler and Lloyd, 2002). In reaching that conclusion, Bouffler and Lloyd (2002) noted that Dubrova et al. (2002a) reported a 1.8-fold increase in minisatellite mutation frequency at doses cited as greater than 1 Sv. That value is broadly consistent with the genetic doubling dose of 1 Sv used by ICRP (1991) and UNSCEAR (2001). In a recent comprehensive review of the basis and derivation of genetic risks, UNSCEAR (2001) discussed the work of Dubrova and colleagues and concluded that “minisatellite variations very rarely have phenotypic effects.” UNSCEAR did not include data on minisatellite mutations in its genetic risk estimates. Where associations between minisatellite variations and phenotypic effects have been

found, they have been for multifactorial diseases whose complex etiology involves multiple genes and interactions with environmental factors. Such diseases are far less responsive to an increase in mutation rate than those due to single-gene mutations (UNSCEAR, 2001). Bouffler and Lloyd (2002) concluded that minisatellite mutations are unlikely to affect the incidence of heritable disease substantially. We conclude that no new evidence on radiation-induced minisatellite mutations has been published that requires revision of estimates of the heritable risk posed by radiation exposure in humans.

In summary, recent studies in cellular and molecular radiation biology are providing new insights into how radiation interacts with cellular components and how signals can be transferred from “hit” cells to “unhit” ones. The information should improve understanding of the underlying cellular changes that might be involved in the induction of mutations and of how the changes are related to an excess risk of cancer or hereditary disorders after radiation exposure. In this context, radiation risk assessments and the consequent risk estimates are disease-based. That is, they are derived from the findings of epidemiologic studies of exposed human populations, buttressed by the results of experimental studies of irradiated laboratory animals. Thus, they do not rely directly on mechanistic considerations. Furthermore, risk-assessment approaches are supported by information on the dose-response relationships obtained for a variety of mutational end points known to be associated with carcinogenesis and hereditary effects. Reviews by authoritative international and national scientific bodies of the risks to health arising from exposure to low doses of radiation have included knowledge of potential novel biologic mechanisms; they include the recent NCRP review that led to Report 136 (NCRP, 2001).
None of those reviews concluded that the epigenetic phenomena require modification of the LNT dose-response model that forms the basis of current risk estimates. A move to a more biologically based risk-assessment approach would require consideration of potentially confounding factors in the low-dose response. With respect to genetic susceptibility to radiation-induced tumors, the current position of both ICRP and NCRP is that the effect of susceptibility mutations on population risk would be very small. For individual risk, susceptibility mutations would have a minor effect at low doses; they might have a much larger effect at the high doses received in therapy. The effect of single-nucleotide polymorphisms on sensitivity to tumor induction by radiation is not known.

Conclusions

The committee concludes, on the basis of the recent data on radiation-induced responses at the cellular and molecular levels discussed in this section, that current cancer risk estimates do not need revision. That conclusion also rests on the fact that current risk estimates are developed directly from human tumor frequencies.

The committee further concludes that continued monitoring of research in cellular and molecular radiation biology as related to radiation-induced cancer risk is needed.

RECENT DEVELOPMENTS IN RADIATION DOSIMETRY AND RADIATION DOSE AND RISK ASSESSMENT

Radiation Dosimetry

Estimates of health risks to exposed cohorts in the HRSA program have historically been obtained from dose assessment, or retrospective dosimetry. That was necessary because many of the people, in particular downwinders, did not have personal dosimeters, and there was a lack of comprehensive workplace and environmental monitoring. Reconstructing the external dose requires information on fallout deposition patterns, lifestyles, shielding by building materials, and dose-conversion factors. Reconstructing the dose from internal emitters involves detailed studies of the movement of the deposited radionuclides through the food chain into the body and of the resultant organ doses, obtained by using physiologically based pharmacokinetic (PBPK) models. Descriptions of those procedures for fallout are presented by Bouville et al. (2002) and Simon and Bouville (2002). There are continuing efforts to update conversion factors relating radioactivity to dose for both internal and external exposures. Conversion coefficients for external radiation for use in radiologic protection have been revised by ICRP (ICRP, 1996) and the US Environmental Protection Agency (EPA) (Eckerman and Ryman, 1993). A summary of procedures for dose estimation from radionuclides in the environment has also been published by the International Commission on Radiation Units and Measurements (ICRU, 2002). Doses from internally deposited radionuclides were estimated with PBPK models, such as those developed by ICRP (1979).
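As a rough illustration of the external-dose bookkeeping described above, the sketch below multiplies a deposition density by a dose-rate conversion factor and a decay-integrated exposure time, with simple occupancy and shielding corrections. The function name and every numeric input are illustrative assumptions, not values or methods from the cited studies.

```python
# Illustrative sketch of external-dose reconstruction from fallout deposition.
# All coefficients are hypothetical placeholders, not values from the studies
# cited in this chapter.

def external_dose_mGy(deposition_kBq_m2: float,
                      dose_rate_coeff: float,
                      time_integral_h: float,
                      shielding_factor: float,
                      occupancy_outdoors: float) -> float:
    """Estimate external dose (mGy) from ground deposition of a radionuclide.

    deposition_kBq_m2  : fallout deposition density (kBq/m^2)
    dose_rate_coeff    : dose rate per unit deposition (mGy/h per kBq/m^2)
    time_integral_h    : effective exposure time (h), integrating decay
    shielding_factor   : fraction of the outdoor dose rate received indoors
    occupancy_outdoors : fraction of time spent outdoors
    """
    # Full dose rate while outdoors; shielded dose rate while indoors.
    correction = occupancy_outdoors + (1.0 - occupancy_outdoors) * shielding_factor
    return deposition_kBq_m2 * dose_rate_coeff * time_integral_h * correction

# Example with made-up numbers: 50 kBq/m^2 deposition, 1e-4 mGy/h per kBq/m^2,
# 100 h of decay-weighted exposure, 40% indoor transmission, 20% time outdoors.
print(round(external_dose_mGy(50.0, 1e-4, 100.0, 0.4, 0.2), 3))  # 0.26
```

Actual reconstructions add radionuclide-specific decay, weathering, and ground-roughness terms, but the multiplicative structure is the same.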
Dose-conversion factors for internal deposition of radionuclides have been revised by EPA (Eckerman et al., 1988). Tissue weighting factors, wT, are defined as the fractions of the stochastic risk of carcinogenesis or hereditary effects resulting from radiation exposure of organ T relative to the total risk posed by uniform exposure of the entire body (ICRP, 1991). The most recent accepted values are shown in Table 3.7. Modifications of wT are being reviewed by ICRP on the basis of the latest assessment of cancer incidence from epidemiologic studies. The wT for the gonads may be reduced by a factor of 5, to 0.04. The wT for the breast may increase from 0.05 to 0.12. No changes in the wT for the thyroid or the respiratory tract are expected. There has been a complete revision of the model of the human respiratory tract (ICRP, 1994) and of the basic anatomic data on the skeleton (ICRP, 1995).
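The role of wT can be illustrated with the defining relationship, effective dose E = sum over organs T of wT times the organ equivalent dose HT. The sketch below uses a subset of the ICRP (1991) weighting factors; the organ doses are arbitrary example values, not data from this report.

```python
# Effective dose E = sum_T( w_T * H_T ) over exposed organs (ICRP, 1991).
# Subset of ICRP Publication 60 tissue weighting factors; the organ
# equivalent doses (mSv) below are arbitrary example values.

W_T = {
    "gonads":      0.20,
    "bone_marrow": 0.12,
    "lung":        0.12,
    "thyroid":     0.05,
    "breast":      0.05,
}

def effective_dose(organ_doses_mSv: dict) -> float:
    """Weighted sum of organ equivalent doses (mSv)."""
    return sum(W_T[organ] * h for organ, h in organ_doses_mSv.items())

# Example: a nonuniform exposure in which the thyroid receives most of the
# dose, as with 131I ingestion.
doses = {"thyroid": 100.0, "bone_marrow": 2.0, "lung": 1.0}
print(round(effective_dose(doses), 2))  # 5.36
```

The example shows why a large thyroid dose translates into a much smaller effective dose: the thyroid's wT of 0.05 reflects its modest share of total stochastic risk.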

Collectively, the revisions in dose-conversion factors and other dosimetry measures should reduce uncertainty in estimates of dose, but they will not substantially change the general assessment of risk to cohorts in the HRSA program. The revised PBPK model of the human respiratory tract does not include dosimetry for inhalation of radon or of the short-lived decay products of radon that are referred to as radon daughters. Historically, the risks posed by radon have been related to the time-integrated concentration of potential alpha energy from short-lived radon daughters, usually expressed in working level months (WLM). The committee does not expect that practice to be revised. Any changes in risk estimates associated with radon will be related to radiation biology or observed cancer incidence rather than to a revised paradigm for dosimetry.

The most comprehensive database on risks associated with external exposure to ionizing radiation is the Life Span Study of the Japanese atomic-bomb survivors conducted by the Radiation Effects Research Foundation. Previous estimates of risk were based on dose assessments for each person according to a system called DS86 (NRC, 1987). In 2001, a National Research Council report made recommendations for revisions of DS86 to reduce uncertainty in the dose assessments (NRC, 2001). The revisions have been completed and will be published as DS02 in 2005. The new protocol has been used to obtain revised estimates of dose for each person in the study. The new data indicate that cancer-mortality risk factors (relative risk per unit dose) will decrease by about 8% because of the changes in dosimetry, principally because of an increase in the estimated gamma-ray dose for both cities. There are, however, no changes in the apparent shape of the dose-response curve or in the age and time-since-exposure patterns of risk.
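The WLM metric mentioned above has a simple arithmetic definition: 1 WLM is exposure to a radon-progeny concentration of 1 working level (WL) for 170 working hours, so cumulative exposure is average concentration (in WL) times hours, divided by 170. A minimal sketch with made-up inputs:

```python
# Cumulative radon-progeny exposure in working level months (WLM).
# Definition: 1 WLM = exposure at a concentration of 1 WL for 170 h.
# The concentration and hours below are arbitrary example values.

WORKING_HOURS_PER_MONTH = 170.0

def exposure_wlm(concentration_wl: float, hours: float) -> float:
    """Time-integrated radon-progeny exposure (WLM)."""
    return concentration_wl * hours / WORKING_HOURS_PER_MONTH

# Example: 0.3 WL average concentration over a 2,000-hour working year.
print(round(exposure_wlm(0.3, 2000.0), 2))  # 3.53
```

In practice a miner's WLM is accumulated period by period from workplace concentration records, but each term is this same product.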
Efforts are under way to evaluate and reduce uncertainties in the risk estimates for mortality and to develop risk estimates for cancer incidence (Preston et al., 2004). The risk of thyroid cancer in people exposed to 131I has now been conclusively demonstrated as a result of the 1986 Chornobyl accident. The Institute of Medicine and National Research Council discussed the early skepticism that met reports of increased thyroid-cancer incidence after Chornobyl and the later findings showing that irradiation of the thyroid by 131I is almost, if not fully, as effective as external irradiation (IOM-NRC, 1999). Most of the radiation-induced thyroid cancers incurred after that accident are papillary thyroid cancers, the latent period is short, and there are indications that they are more aggressive than usual. Recent findings on the dosimetry of 131I exposure from Chornobyl and the related risk may help to reduce uncertainties in estimates of its health effects.

Radiation Dose and Risk Assessment

National Cancer Institute 1997 131I Study

Since RECA was enacted in 1990, the National Cancer Institute (NCI) has completed a comprehensive study of radiation doses to the thyroid from 131I

released from tests at the Nevada Test Site (NTS) (NCI, 1997). The study used fallout measurements, atmospheric modeling, and statistical analysis to estimate the 131I fallout deposition density in each county of the continental United States and the corresponding radiation doses to the thyroid for each atmospheric test at the NTS. NCI took into account the ages of those at risk of exposure and their consumption of milk and other foodstuffs. NCI presented its results in tables and in a series of maps showing the doses in all counties for four milk-consumption rates for people born in selected years from 1930 to 1962; NCI also produced maps showing doses for individual test series. NCI has provided the committee with updated versions of several of the maps, and we present them in this report. The maps show the radiation doses to people by county. They include the latest revisions of the dose calculations and show the doses as contours. Thus, they offer a more accurate representation than the earlier NCI maps in that the doses are based on where the fallout was deposited and do not stop at county boundaries. Figure 4.1 shows the estimated dose to the thyroid of a child born on January 1, 1951, for average milk consumption. Estimated doses range from less than 1 mGy to more than 100 mGy. The map gives the total thyroid dose from both external and internal radiation. The great majority of the dose, however, is from the ingestion of 131I in foodstuffs, particularly milk. The dose to the thyroid from 131I depended significantly on the age of the person when the exposure was received.
FIGURE 4.1 Geographic distribution of estimated total (external + internal) dose (mGy) from all NTS tests to the thyroid of children born on 1 January 1951 who were average milk drinkers (map courtesy of National Cancer Institute).

Because of the relatively higher uptake of iodine in young children and the smaller thyroid, which result in a greater

iodine concentration, the thyroid dose in young children is higher than that in other age groups for the same amount of fallout and the same dietary intake of 131I. The dependence of dose on age at exposure tends to decrease with age until adulthood, when dose varies little with age. Because of the dependence of dose on age at exposure in young children, maps of thyroid doses to people born at other times may differ from Figure 4.1. Such maps—for example, for a person born on January 1, 1954—would reflect the influence of 131I deposited in fallout from tests that occurred while that person was a small child. For comparison, Figure 4.2 shows the thyroid doses from all tests at the NTS to people who were adults during the period of nuclear testing.

FIGURE 4.2 Geographic distribution of estimated total (external + internal) dose (mGy) from all NTS tests to the thyroid of those of adult age at the time of exposure who were average milk drinkers (map courtesy of National Cancer Institute).

Figures 4.1 and 4.2 show that people living in many parts of the United States, not just those living near the NTS, received high thyroid doses as a result of the nuclear tests. For example, for children born on January 1, 1951, thyroid doses in areas of Idaho, Montana, and Colorado were also high, and thyroid doses in other areas, such as the Midwest, upstate New York, and Vermont, were elevated. Much of the geographic distribution is due to the environmental dynamics of 131I. First, away from the NTS, 131I is deposited mainly through precipitation (“wet” deposition), so areas that receive precipitation while a fallout cloud is passing overhead are more likely to have high deposition. Second, once 131I is deposited, the main exposure pathway is ingestion of milk from cattle or goats that grazed on pasture that received the fallout. Consequently, thyroid doses tend to be

elevated in areas that received fallout through wet deposition and where entry into the milk pathway was possible. On the basis of the 1997 study (NCI, 1997), NCI developed a 131I dose calculator, published on the Web at http://ntsi131.nci.nih.gov/. The user supplies date of birth, sex, locations and dates of residence, and milk-consumption pattern. The calculator then uses the results of the 1997 study to estimate the thyroid dose from 131I and its 90% credibility interval.

Institute of Medicine-National Research Council Review of 1997 National Cancer Institute 131I Study

The NCI 131I study was reviewed by an Institute of Medicine-National Research Council committee in 1999 (IOM-NRC, 1999). That committee stated that the NCI approach was generally reasonable but found that the county-specific estimates of thyroid dose were too uncertain to be useful in estimating individual doses. Individual doses depend strongly on specific variables, such as age at exposure and amount of milk consumed, that are not considered in the county doses. Estimating individual doses is possible but highly uncertain because important data are not available or are of questionable reliability. The committee also observed that there was little epidemiologic evidence of a widespread increase in thyroid cancer.

Centers for Disease Control and Prevention-National Cancer Institute 2001 Draft Feasibility Study

In 2001, the Centers for Disease Control and Prevention (CDC) and NCI published a draft feasibility study of the health consequences of nuclear-weapons testing for the American population (CDC-NCI, 2001). The report considered all radionuclides that contributed substantially to the radiation dose and estimated the effective dose and the dose equivalent to the organs at risk. Both NTS fallout and global fallout were considered.
Global fallout included not only the fallout from American tests but also the contribution from tests by other nations. The draft feasibility study concluded that a full dose assessment was possible but that it would be a major effort, comparable with the NCI 131I study discussed above. The CDC-NCI study used the 131I fallout deposition densities found in the 1997 NCI study as a starting point for calculating the deposition densities from NTS fallout of the 33 other radionuclides that contributed substantially to the radiation dose. The study then calculated the doses from both internal and external exposure to those radionuclides. Only for the thyroid did the internal dose (from 131I) substantially exceed the dose from external radiation. As a result, most organ doses (other than the thyroid dose) were roughly the same.

NCI has prepared updated maps showing the current best estimates of dose to various organs and has made them available to the committee. Those maps are used in this report rather than the original maps in the 2001 draft feasibility study. In Figures 4.3 and 4.4, the dose to the red bone marrow from nuclear tests at the NTS is shown as representative of the other organ doses for a child born on January 1, 1951, and for an adult.

FIGURE 4.3 Geographic distribution of estimated total (external + internal) dose (mGy) from all NTS tests to the red bone marrow of children born on 1 January 1951 (map courtesy of National Cancer Institute).

Both the 1997 NCI study and the 2001 CDC-NCI draft feasibility study estimated doses to the thyroid from 131I released at the NTS. Although the study results are similar, they are not identical. The difference is discussed briefly in the 2001 report and attributed to differences in estimating the amount of fallout retained by vegetation. In addition, the 2001 results are preliminary, having been prepared for a draft feasibility study, and do not include uncertainties. Besides estimating doses from NTS fallout, the 2001 draft feasibility study evaluated doses from global fallout. The doses to red bone marrow were found to be slightly higher from global fallout than from NTS fallout; the reverse was true for the thyroid. The 2001 CDC-NCI draft feasibility study presents maps similar to Figures 4.3 and 4.4 for global-fallout red-marrow doses by county in the United States (CDC-NCI, 2001). The study did not estimate the 131I doses to the thyroid from global fallout by county (although some 131I was occasionally present), because of a lack of data; the 131I doses were given for the United States as a whole. Doses from 3H and 14C, which move through the hydrologic and carbon cycles, respectively,

were also estimated for the United States as a whole.

FIGURE 4.4 Geographic distribution of estimated total (external + internal) dose (mGy) from all NTS tests to the red bone marrow of those of adult age at the time of exposure (map courtesy of National Cancer Institute).

The report notes that the proportion of global fallout due to US weapons testing can be roughly determined from the fission yield of the US tests relative to the total fission yield from all high-yield nuclear testing.

National Research Council Review of Centers for Disease Control and Prevention-National Cancer Institute 2001 Draft Feasibility Study

The draft feasibility study was reviewed by a National Research Council committee that published its report in 2003 (NRC, 2003c). Among its conclusions and recommendations, the National Research Council committee found the following:

- The 131I fallout data and the resulting dose and thyroid-cancer risk estimates should be reanalyzed to incorporate the new dosimetry and risk estimates from Chornobyl and thereby update the 1997 NCI report.
- On the basis of the results of the draft feasibility study, further work on fallout radionuclides other than 131I would not be warranted, because of the very low levels of associated exposure and the uncertainties in their distribution over time and location.
- In agreement with the authors of the draft feasibility study, the dose and risk estimates presented were developed as population averages and should not be used to estimate risks to specific individuals.

- A program should be established to examine and archive fallout-related documents from sites operated by the Department of Energy and the Department of Defense and other relevant sites.

National Cancer Institute and Centers for Disease Control and Prevention Working Group 2003 Revision of NIH Radioepidemiological Tables

An NCI and CDC working group (NCI-CDC, 2003) reviewed and revised the 1985 National Institutes of Health radioepidemiological tables (NIH, 1985). The revision was based principally on the 1958–1987 Life Span Study Tumor Registry data on the atomic-bomb survivors of Hiroshima and Nagasaki. The computer program Interactive RadioEpidemiological Program (IREP, version 5.3) incorporates the results of this work to give probability of causation/assigned share (PC/AS) values for individual radiation exposures. Risk coefficients and the associated PC/AS values have in some cases changed substantially, both from the original NIH tables and from their 1988 revision by the Committee on Interagency Radiation Research and Policy Coordination (CIRRPC) (CIRRPC, 1988). The NCI-CDC report used cancer-incidence data on the atomic-bomb survivors rather than the cancer-mortality data used for most cancers in the 1985 report. For thyroid cancer, the NCI-CDC report used a compilation of seven studies (Ron et al., 1995), which was considerably more extensive than that used in the NIH report. CIRRPC had also assumed that, for a particular cancer, an applicant had a low baseline risk at the 10th percentile of the cancer-risk distribution and that the excess relative risk (ERR) varied inversely with the baseline risk. The NCI-CDC revision did not use those assumptions, which had accounted for a factor-of-2 increase in the ERR for most cancers (NCI-CDC, 2003).

CONCLUSION

This chapter has presented the results of recent studies in radiation epidemiology, biology, and dosimetry.
The overall aim is to develop a database that forms part of the consideration of new populations or geographic areas for coverage by RECA. Chapters 5 and 6 consider the issue of additions to RECA.