9 Mortality Ascertainment

Whether we look at all-cause or disease-specific mortality, use standardized mortality ratios or proportional hazard modeling, adjust for a host of covariates or examine crude rates, the analysis hinges on having correctly ascertained mortality data. We need to know who died, when, and of what cause or causes.

Two pieces of information are essential: vital status (alive, dead, unknown) and cause of death (for those dead). We measure the data quality of these items on two scales: completeness (known, unknown) and correctness as determined by corroboration from other sources.

In this chapter we first describe this study's mortality ascertainment procedure, as planned given various constraints, and as adapted due to unforeseen and insurmountable obstacles. Appendix F consists of data illustrating the degree of success in achieving completeness and correctness. In closing this chapter, we discuss what effects that may have on the analysis and its interpretation.
Procedure

Vital Status Ascertainment

Department of Veterans Affairs (VA) records are the core source of vital status information.20 We begin with the Beneficiary Identification and Records Locator Subsystem (BIRLS), a computer file of VA transactions concerning benefits to individual veterans. Of particular interest to this study is the recording of death benefit requests. The BIRLS database fields include: name, claim number, claim folder location, Social Security Number, military service number(s), date of claim, and date of death.21 BIRLS can be searched by automated routines using a standard protocol or by "hand," using whatever criteria the analyst seated at the terminal chooses.

Veterans were placed in one of three categories, depending on the success and findings of the BIRLS search:

1. BIRLS record was found with reference to a death.
2. BIRLS record was found with no reference to a death.
3. BIRLS record was not found.

If the BIRLS record for an individual was not found, the next source searched was the Veterans Administration Master Index (VAMI). Now catalogued on microfilm, and replaced by the electronic BIRLS in the early 1970s, the VAMI had been maintained manually on 3 × 5 index cards, one for each veteran with a benefits claim. In addition to mortality information, VAMI is a source of other identifying information (e.g., a different spelling of a name) that made subsequent BIRLS searches successful. VAMI searches must be done by hand.

Federal databases other than those maintained by the VA served as sources of vital status ascertainment. The Health Care Financing Administration (HCFA) of the Department of Health and Human Services (DHHS) searched its computerized database on Medicare enrollees and provided vital status on all reasonable to good potential matches, based on Social Security Number (SSN), date of birth, and name.
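The BIRLS-to-VAMI search cascade described above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the actual MFUA or VA software; the record layouts, identifiers, and function names are hypothetical.

```python
# A minimal sketch of the vital-status search cascade described in the text.
# The data structures and names here are illustrative assumptions, not the
# actual BIRLS/VAMI systems.

BIRLS = {  # toy stand-in for the BIRLS computer file
    "A1": {"death_benefit_claim": True},
    "A2": {"death_benefit_claim": False},
}

VAMI = {  # toy stand-in for the microfilmed VAMI index cards
    "A3": {"alternate_id": "A2"},  # e.g., a different spelling of a name
}

def birls_category(vet_id):
    """Classify one veteran by the outcome of the BIRLS search."""
    record = BIRLS.get(vet_id)
    if record is None:
        # No BIRLS record: search VAMI by hand; other identifying
        # information found there may make a retried BIRLS search succeed.
        card = VAMI.get(vet_id)
        if card is not None:
            return birls_category(card["alternate_id"])
        return "not found"
    if record["death_benefit_claim"]:
        return "death reference"
    return "no death reference"
```

The three return values correspond to the three BIRLS-based categories listed above.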
As we discuss later in this section, HCFA information was used as one measure of the completeness of VA-based (BIRLS and VAMI) death information. All veterans not found in BIRLS, plus a sample of veterans found as Dead and found with no mention of death (presumed alive), were searched against the HCFA database for vital status.

20 Although JAYCOR (for the Defense Nuclear Agency) constructed the participant cohort and MFUA staff constructed the comparison cohort, MFUA followed identical protocols for vital status follow-up of members of both cohorts, using the same data sources and search algorithms.

21 Data fields available on BIRLS but not relevant to this study are: insurance file and policy numbers, death in service, cause of death, power of attorney, dates of entry onto and release from active duty, branch of service, character of service, separation reason code, paygrade, and nonpay days.

The National Death Index (NDI), maintained for research purposes by the National Center for Health Statistics since its 1979 inception, assembles state death records. NDI searches its database by year for potential matches based on name, month and year of birth, and SSN. NDI provides the death certificate number (thereby confirming the fact of death) and the state possessing it.

After all available sources of vital status information were plumbed, the BIRLS-based categories were redefined into three categories for use in this study's analyses: Dead, Alive, and Lost to Follow-Up (LTFU). If there was evidence of an individual's death, that veteran is coded "Dead"; if the veteran was found in BIRLS without reference to death, he was coded "Alive"; if a veteran was not found at all in BIRLS, he was coded "LTFU." In the mortality analyses, we later collapse "Alive" and "LTFU" into a "Not Known Dead" category (see Chapter 9). Although this loses some detail about follow-up, it is the most accurate: a veteran is either "Dead" or "Not Known to be Dead."

Cause and Date-of-Death Ascertainment

For individuals found in BIRLS and identified there as dead, BIRLS provided a date of death and VA claims folder location. MFUA requested a copy of the death certificate from the claims folder location and sent all death certificates to a certified nosologist, a specialist in classifying diseases. The nosologist wrote the four-digit ICD-9 codes for each cause of death listed on the death certificate, specifying one entry as the underlying cause of death and any others as associated causes of death.

Using the NDI-provided death certificate number and state location, researchers can request, for another fee and a lengthy application assuring privacy protections, copies of the certificates from individual states.
MFUA used NDI and state follow-up data as much as frugal efficiency would allow. In addition to their use in validating VA information, NDI findings were also used to fill gaps in VA data. State vital records offices provided the death certificates of 908 veterans identified through NDI searches.

Summary of Mortality Ascertainment Quality and Its Potential Effects on Valid Interpretation of the Findings

VA databases were the primary sources of vital status information in this study. In order to confirm that these sources were appropriate, we checked them against two other sources: the Health Care Financing Administration (HCFA) and the National Death Index (NDI). In addition to this assurance that our mortality ascertainment through the VA sources was appropriate, we also checked the consistency of our cause-of-death coding. These quality assurance procedures are documented in Appendix F; the results are discussed below.

Information on cause and date of identified deaths is available for 87.7 percent of the deaths included in this study's analyses (Appendix F, Table F-3). From our comparison of VA mortality data with that of NDI and HCFA, we found that our use of VA records to determine vital status is well justified. Use of other sources for variously selected samples did not yield substantially different results. In addition, the reliability of death coding according to the ICD-9 classification system was good, as demonstrated by discrepancy rates of 4 percent for all causes and 0.6 percent for leukemia (the primary hypothesized radiation-associated cancer).

Discussion in Appendix F covers how this level of data quality was the best we could obtain for this study. Here, we discuss how the characteristics of our data, as well as the characteristics of mortality data in general, influence our interpretation of this study's findings.

Both the HCFA and NDI validation samples indicate that MFUA's reliance on VA data sources misses only a small number of deaths alleged by those sources. HCFA and NDI, in turn, miss some VA-alleged deaths. None of the databases is a true standard. What may be more important to the interpretation of our findings is that the VA-missed (and, therefore, MFUA-missed) deaths are more prevalent among the participants than the controls. If participating in Operation CROSSROADS were associated with increased mortality, a differential in ascertainment between participants and controls could influence risk ratios if the differential were large enough relative to the expected numbers of deaths.
The difference between 3.6 percent and 2.2 percent missed deaths (see Table F-8) is very small. For this study of 73,704 veterans with 22,896 deaths (as of 31 December 1992), we lose 344 deaths (1.5 percent × 31.1 percent × 73,704). If those missing deaths are evenly distributed across causes of death, and we have no evidence that they are not, the relative distribution of cause-specific mortality would not be affected, and the risk ratio for all-cause mortality would diminish only slightly.

The less than 1 percent discrepancy in leukemia underlying cause-of-death coding is reassuring not only because of its small size, but also because it is likely to be dwarfed by other, unmeasured errors in cause-of-death coding. What is noted on a death certificate might be determined by, aside from the actual cause, competing diagnoses, inaccurate diagnoses, privacy concerns of the individual or the community, the professional background of the signer, geographic practice patterns, and the decedent's health history and earlier access to medical care. Cause of death as documented on a death certificate also varies over time: diagnoses come in and out of favor, and medical technology or knowledge may revise diagnostic criteria and categories. Over the course of the 50 years of mortality follow-up for this study, for example, terminology has been revised regarding lymphosarcoma and various dementias.

The basic comparisons in this study are based on all-cause mortality, which is generally a more accurate measure than cause-specific mortality. As for the specific cause of most interest in this study, leukemia, the reliability of that coding is excellent.

In summary, we found that our mortality ascertainment was very complete and well balanced between participants and controls. If we were to take NDI and HCFA results as reliable indicators of missing deaths, the impact on our crude mortality would be either to increase participant mortality by about 1.2 percent (using HCFA data) or to decrease it by 0.4 percent (using NDI data) relative to controls. Finally, the recoding of mortality causes suggests that any error induced by coding will be very small in comparison with other possible sources of error. We believe these mortality data may be used with confidence.
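As a check, the missed-deaths figure cited above can be reproduced from the stated counts; the 31.1 percent factor is the crude proportion of deaths in the cohort (22,896 of 73,704).

```python
# Reproducing the missed-deaths arithmetic from the text above.
# All figures come from the chapter; nothing here is new data.
cohort = 73_704        # veterans in the study
deaths = 22_896        # deaths as of 31 December 1992

death_proportion = deaths / cohort   # ~0.311, i.e., 31.1 percent
missed_differential = 0.015          # 1.5 percent, from Table F-8

lost_deaths = missed_differential * 0.311 * cohort
print(round(lost_deaths))            # prints 344
```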