Methodological Issues in Developing Community Health Profiles and Performance Indicator Sets
Michael A. Stoto
The focus of this report is on developing the conceptual framework for a community health improvement process (CHIP) by which communities can use health profiles and performance measures to marshal the forces in their communities to improve the health of populations. Given this focus, much of the committee's attention to the development of community health profiles and performance indicator sets has been directed to content issues. When it comes to the implementation of these concepts in actual communities, however, a number of practical, methodological issues often arise. Because the development of measures must depend on local circumstances as well as the available data, this appendix cannot provide cookbook solutions to statistical issues. Rather, it discusses these issues so as to inform those wishing to develop performance measures in local communities.
To address these points, this appendix draws on positive and negative examples from Healthy People 2000 (USDHHS, 1991), as discussed in "Public Health Assessment in the 1990s" (Stoto, 1992b). The objectives in Healthy People 2000 are not performance measures per se, but that report is a major point of reference for many in public health and its objectives do provide starting points for CHIPs to consider. Most of the objectives in Healthy People 2000 are carefully written, but others illustrate a number of methodological and statistical problems that should be avoided.
SPECIFICATION OF PERFORMANCE INDICATORS
Performance measures must be carefully specified so that they truly measure the performance of accountable entities rather than other changes in a community's health. If the results are to be interpreted with confidence, careful development and testing are needed to ensure that the objectives are operationalized in a clear and unambiguous way. For the HEDIS (Health Plan Employer Data and Information Set) measures, for instance, substantial time and effort was required to develop precise definitions that make sense in a variety of managed care settings and are obtainable from readily available data files (NCQA, 1993). Even a measure that seems simple, such as the proportion of children at age 24 months who have received all of the recommended immunizations, requires agreement on which immunizations are recommended at what time, decisions about whether to include children who have not been covered by the health plan since birth, and so on.
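The definitional decisions involved can be made concrete in a short sketch. The vaccine schedule, field names, and records below are illustrative assumptions, not the actual recommended schedule or a real data layout:

```python
# A sketch of operationalizing an immunization coverage measure. The
# schedule, field names, and records are hypothetical.
RECOMMENDED_BY_24_MONTHS = {"DTP": 4, "polio": 3, "MMR": 1}

children = [
    {"enrolled_since_birth": True,  "doses": {"DTP": 4, "polio": 3, "MMR": 1}},
    {"enrolled_since_birth": True,  "doses": {"DTP": 3, "polio": 3, "MMR": 1}},
    {"enrolled_since_birth": False, "doses": {"DTP": 4, "polio": 3, "MMR": 1}},
]

# One of the decisions the text describes, made explicit: include only
# children covered by the plan since birth.
eligible = [c for c in children if c["enrolled_since_birth"]]
up_to_date = [c for c in eligible
              if all(c["doses"].get(v, 0) >= n
                     for v, n in RECOMMENDED_BY_24_MONTHS.items())]

print(f"{len(up_to_date)} of {len(eligible)} eligible children fully immunized")
```

Each branch of the definition (which vaccines, how many doses, who counts as eligible) must be settled explicitly before the measure can be computed consistently across plans.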
Performance measures must be written in a statistically operational form. When they are not, it can be difficult to tell what progress is being made, even if all of the information is in hand. For example, Healthy People 2000 Objective 7.17 calls for local jurisdictions to have "coordinated, comprehensive violence prevention programs." Although a long list of attributes of coordinated and comprehensive programs is given in the text, no operational definition is provided by which to judge whether a particular jurisdiction's program is coordinated and comprehensive. It would be better to identify a small number of performance indicators connecting accountable entities to specific actions, as illustrated in the committee's prototype violence indicator set (Appendix A).
Problems of this sort often arise when one does not distinguish between general health issues and operational measures of these issues. Rarely are data available in the precise form policymakers prefer, so concessions must be made to data constraints. The presentation of the performance measures should reflect this compromise by separately identifying the issues to be monitored and the best available data or proxy variables for those issues and by stating targets in terms of the measurable quantities. For example, Healthy People 2000 measures the initiation of cigarette smoking by children and youth as the proportion of cigarette smokers in the 20–24 age group. The actual measure is smoking prevalence, not initiation. This is appropriate, however, because prevalence rates are easier to obtain from population surveys and because initiation rather than cessation is thought to be the dominant force for young people.
Measures should be both valid and reliable and both sensitive and specific (Sofaer, 1995). Practical problems often require compromises in these respects. Healthy People 2000 Objective 15.1 on coronary heart disease exemplifies the problem. The objective addresses the coronary heart disease mortality rate because this component of overall cardiovascular mortality is the most amenable to prevention efforts. The specific grouping of diagnostic codes used to define coronary heart disease is not, however, routinely available in vital statistics reports. In many communities, it might be more appropriate to measure progress in terms of readily available cardiovascular mortality rates, while bearing in mind that reduction beyond a certain point is unlikely.
Lacking population-based data on the incidence or prevalence of specific diseases, performance measure developers often think about using numbers of people receiving treatment for the disease in question. For instance, Healthy People 2000 Objective 15.3 calls for a reversal in the increasing number of people with "end-stage renal disease (requiring dialysis or transplantation)." The baseline figures cited, however, count the number of people receiving dialysis or transplantation, not those requiring it. Thus, these trends reflect changes in diagnostic and treatment patterns as well as access through an expanding federal program, and it is doubtful whether future changes in the data can be attributed to the success of prevention activities as intended by Healthy People 2000. In certain circumstances, however, hospital treatment data can yield appropriate performance measures. For instance, an Institute of Medicine (IOM, 1993) report on measuring access to health care identifies a number of "ambulatory care sensitive conditions" for which hospital admissions should be avoidable if individuals have access to appropriate ambulatory care.
UNIT OF ANALYSIS
When a performance measure calls for action by a number of similar entities in the community such as schools, work sites, and health care plans, there are basically two ways to create performance measures. A community can measure the proportion of entities taking the action, as in Objective 3.11 of Healthy People 2000:
Increase to at least 75 percent the proportion of worksites
with a formal smoking policy that prohibits or severely restricts smoking at the workplace.
With a measure worded in this way, a small community with only four work sites could meet the goal if the three smallest sites had smoking policies. If, however, the one work site without a policy were very large, only a minority of the community would receive the benefits of nonsmoking policies. Alternatively, a community can measure the proportion of people affected by an action, as in Objective 1.8 of Healthy People 2000:
Increase to at least 50 percent the proportion of children and adolescents in 1st through 12th grade who participate in daily school physical education.
In technical terms, the difference between these two types of measures is that the latter can be thought of as a weighted average, where the weights correspond to the number of students in each school. In practical terms, the first sort of measure can be obtained simply from a survey of a small number of work sites, schools, or similar entities. The second requires a population survey or at least some calculations based on numbers of people associated with each entity. The first type of measure also tends to suggest that the impetus for action is with the work site (or school), whereas the second focuses on the individual.
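The two forms of the measure can diverge sharply, as a small sketch with hypothetical figures shows:

```python
# Hypothetical community with four work sites; head counts are illustrative.
site_has_policy = [True, True, True, False]
site_employees = [40, 55, 65, 840]

# Measure 1: proportion of entities (work sites) with a smoking policy.
entity_measure = sum(site_has_policy) / len(site_has_policy)

# Measure 2: proportion of people covered -- a weighted average in which
# each site's weight is its number of employees.
covered = sum(n for has, n in zip(site_has_policy, site_employees) if has)
person_measure = covered / sum(site_employees)

print(f"Entity-based measure: {entity_measure:.0%}")   # 75%: meets a 75% goal
print(f"Person-based measure: {person_measure:.0%}")   # 16%: most workers uncovered
```

Here the entity-based measure meets the target while the great majority of the workforce remains uncovered, illustrating why the choice of unit matters.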
INTERPRETATION OF SURVEY DATA
Population-based health interview surveys provide many of the health status measures that are used in Healthy People 2000 and are potentially available for performance measures. Trends in health interview data, however, can be difficult to interpret (Wilson and Drury, 1984). The U.S. National Health Interview Survey (Adams and Marano, 1995), an important source of data for the year 2000 objectives, measures the annual incidence of acute conditions and the prevalence of chronic conditions through a combination of open- and closed-ended questions about the presence of specific diseases and conditions. A common finding from these data has been that chronic illness and disability have been increasing at the same time that mortality (even for related diseases) has been falling. At least part of this increase does not reflect actual worsening in physical illness. Methodological explanations that may account for the trend include (1) improved survey design that may have increased the proportion of the population reporting diseases and conditions that exist; (2) improved access to medical care and better screening efforts that may have increased the proportion of the population diagnosed with and therefore aware of asymptomatic disease; and (3) changing role expectations and improved disability benefits that may have increased the proportion of the population reporting a work-related disability.
Complex questions are also difficult to monitor through population surveys. Consider, for example, Healthy People 2000 Objective 5.8:
Increase to at least 85 percent the proportion of people aged 10 through 18 who have discussed human sexuality, including values surrounding sexuality, with their parents and/or have received information through another parentally endorsed source, such as youth, school, or religious programs.
Although survey data could provide information on aspects of this objective, specific questions would have to be designed to assess the proportion of adolescents that meet the specific criteria implied.
COMPARISONS ACROSS TIME AND COMMUNITIES
To assess the meaning of performance measures, CHIPs can examine trends over time or can compare their results with an externally set benchmark or with other communities in the same state. Each of these comparisons can provide valuable insights, but this requires that measures be operationalized in a way that will permit meaningful comparisons. Health outcomes measures from hospitals or health care systems, for instance, should be risk adjusted so that they do not inadvertently attribute variations that are a function of population case mix or severity of illness to differential system performance (Sofaer, 1995). Even something as simple as population estimates for use as denominators in rates must be carefully examined. In Massachusetts, for example, adolescent fertility rates calculated using state demographic estimates were found to differ substantially from those obtained when population estimates from national sources were used (D.K. Walker, personal communication, 1996). When comparing across states, denominator data for all should be from the same source.
Healthy People 2000 presents numerical targets for most of
the national objectives, which can provide a starting point for local benchmarking. To determine meaningful local benchmarks, CHIPs must, in addition to standardizing for population composition, take into account differences from national values in baseline rates and trends in the measures in question. Benchmarks can also be set by comparison with other geographic areas or with epidemiological models that account for important risk factors in the population.
There are a number of statistical models that can help CHIPs set meaningful benchmarks. None of these can be used on a strictly mechanical basis, and all require significant subject matter judgment. These methods can, however, give some idea of what is likely to happen in the absence of further interventions or indicate the likely impact of interventions on outcomes. Thus, models can help to set or to fine-tune the benchmarks.
The most straightforward statistical model is simple trend analysis. Such models can predict the level of various objective measures—assuming that current trends continue—as well as provide statistical confidence intervals. Benchmarks should usually be somewhat more favorable than the results that trend analysis suggests will be achieved without any intervention (Stoto, 1989).
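A minimal sketch of such a trend projection, using ordinary least squares on hypothetical annual rates (the figures and target year are illustrative, not actual Healthy People 2000 data):

```python
import math

# Illustrative annual mortality rates per 100,000 (hypothetical).
years = [1988, 1989, 1990, 1991, 1992, 1993, 1994]
rates = [152.0, 149.5, 147.8, 145.0, 143.1, 140.6, 138.9]

n = len(years)
xbar = sum(years) / n
ybar = sum(rates) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, rates)) / sxx
intercept = ybar - slope * xbar

# Projection assumes the linear trend simply continues.
target_year = 2000
projection = intercept + slope * target_year

# Residual standard error and an approximate 95% prediction interval.
resid = [y - (intercept + slope * x) for x, y in zip(years, rates)]
s = math.sqrt(sum(e * e for e in resid) / (n - 2))
se_pred = s * math.sqrt(1 + 1 / n + (target_year - xbar) ** 2 / sxx)
t = 2.571  # t quantile for n - 2 = 5 degrees of freedom
print(f"Projected {target_year} rate: {projection:.1f} per 100,000 "
      f"(95% PI {projection - t * se_pred:.1f} to {projection + t * se_pred:.1f})")
```

A benchmark would then typically be set somewhat more favorable than the projected no-intervention value, as the text suggests.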
Models that identify the lowest possible morbidity and mortality rates observed in specific groups could also be useful in setting targets. Such groups could be other countries or geographic, racial, ethnic, or socioeconomic subpopulations of the United States. Woolsey (1981), for instance, has proposed an index of preventable mortality along these lines. Hahn and colleagues (1990) have estimated the possible reduction in mortality rates that can be expected with the elimination of the most important risk factors for chronic disease.
Mathematical models that relate health outcomes to specific interventions for many specific diseases and health behaviors can also be helpful in setting benchmarks. For instance, the National Cancer Institute has developed a model to project cancer incidence and mortality under various cancer control programs such as prevention programs, screening, and treatment (Levin et al., 1986). Such models require more data than simple trend analyses and take time to develop and verify. In addition, there can be substantial uncertainties in modeling interventions and the interactions among them. The modeling process itself, however, helps to focus discussion and thinking, and leads to a range of plausible benchmark values. Similar models have been, or are being, developed for cardiovascular disease, AIDS, and other diseases (e.g.,
Weinstein et al., 1987). Using such models as appropriate, Closing the Gap (Amler and Dull, 1987) synthesizes much of what is known about the potential health effects of health promotion and disease prevention.
Simple extrapolation models and process models such as the one for cancer form two extremes of a spectrum. Extrapolation models that take into account age-period-cohort effects, projected demographic changes, and other factors (Brown and Kessler, 1988) fall between the two and offer some promise.
DATA FOR LOCAL AREAS
If performance monitoring is to achieve its potential for community health improvement, communities of all sizes—states, counties, municipalities, and other groups such as a company's employees and their families—must adopt their own objectives and measure progress toward them. Counties, cities, and smaller communities, however, often find that local-level data are unavailable or of poorer quality than national data. For instance, in assessing the ability of states to monitor the draft year 2000 objectives prepared in 1989, the Public Health Foundation (1990) found that, on average, states could monitor only 39 percent of the objectives, and the situation is clearly worse for smaller communities. Obtaining community-level data for specific racial, ethnic, and socioeconomic groups is even more difficult.
CHIPs will generally not be able to obtain appropriate data simply by disaggregating national survey data. No national survey is likely to have a large enough sample to provide reliable direct estimates for all of the subpopulations of interest. Furthermore, up-to-date community-level denominator data by race, ethnicity, and socioeconomic status are not generally available from the U.S. Census Bureau. Rather than a single national survey, survey methodologies that can be replicated easily at the community level need to be developed. The Behavioral Risk Factor Surveillance System (BRFSS), developed by the Centers for Disease Control and Prevention (CDC) but implemented by the states (Siegel et al., 1993), might serve as a model.
Even when data are available for small geographical areas, as they are for vital statistics, the events are infrequent, thereby making the rates unreliable. One approach to the problem of sparse data is to use measures that are stable at the local level as proxies for measures used in the national objectives. For instance, a local health department might choose to monitor infant
health in terms of the proportion of low birth weight babies rather than the infant mortality rate. Because the proportion of babies born with low birth weight is higher than the proportion who die, this rate is more reliable for small areas. In choosing such proxy measures, however, it is important to verify that changes in the proposed measure truly reflect changes in the health characteristic to be monitored.
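The reliability gain from such a proxy can be quantified roughly with the binomial approximation to a rate's relative standard error; the birth counts and rates below are illustrative:

```python
import math

def rate_rse(p, n):
    """Approximate relative standard error of a binomial rate p based on n births."""
    return math.sqrt(p * (1 - p) / n) / p

births = 500                # annual births in a small community (illustrative)
infant_mortality = 0.008    # roughly 8 deaths per 1,000 live births
low_birth_weight = 0.07     # roughly 7 percent of births

print(f"RSE of infant mortality rate:  {rate_rse(infant_mortality, births):.0%}")  # roughly 50%
print(f"RSE of low-birth-weight rate: {rate_rse(low_birth_weight, births):.0%}")   # roughly 16%
```

Because the low-birth-weight outcome is far more common, its rate is estimated with much less relative sampling variability from the same number of births.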
Another approach is to use formal statistical methods designed for small areas. These are not yet commonly used in public health assessment but are discussed below because they warrant further development.
Because of the relative lack of survey data at the community level, many of the indicators proposed in Appendix A are derived from administrative records. Administrative data arise from the day-to-day management of a system such as a health care delivery organization, and they usually arise from the records needed to provide appropriate services to individual patients or clients. They frequently come from encounters with health care or service providers, but administrative data rarely include health status measures for any defined population. The administrative data cited in the prototype indicator sets include records from managed care organizations and other health care delivery systems. Administrative records from a variety of public and private organizations (e.g., local welfare agencies and private employers) can also provide valuable data for performance monitoring in CHIPs. Examples include hospital discharge data, including diagnoses, procedures completed, and perhaps even outcomes; public assistance records on immunization and other factors for covered children; and employment records that include health-related data.
With the widespread and increasing use of computerized record systems to manage service delivery in health care, government agencies, and private companies, the growth in the availability of administrative records can fill an important data gap at the community level. Administrative records can be more timely and less costly than special-purpose statistical data systems. On the other hand, administrative records usually relate to services provided to certain individuals, not to the overall need for services or to the health status of the entire population of a community (Hoaglin et al., 1982).
Use of administrative data may also be complicated by the
lack of an appropriate denominator. Indemnity insurers, for instance, often know only the number of "covered lives" in their plan and nothing about the characteristics of that population. With data of this sort, crude per capita rates are the only possible measures; determining the proportions of people in certain demographic groups or with certain health needs is not possible. With such data it is not possible, for instance, to assess the proportion of women age 50 and over who have had mammograms in the past two years.
For example, consider the following indicators from Healthy People 2000:
Increase to at least 75 percent the proportion of adults who have had their blood cholesterol checked within the preceding 5 years (Objective 15.14).
Reduce the prevalence of blood cholesterol levels of 240 mg/dL or greater to no more than 20 percent among adults (Objective 15.7).
On the national level, these proportions can be measured accurately through a population-based survey, the National Health and Nutrition Examination Survey (NHANES). On the local level, a CHIP might try to gather such data from health care records or perhaps even from records of employee screening programs.
With regard to the first of these two objectives, one is likely to find that data on the percentage of health plan members who have had their cholesterol checked are available only from managed care organizations, and probably only from those with good cholesterol screening programs. Thus, the data likely would be biased upward. The second of these two objectives can be calculated only for those whose serum cholesterol has been tested. Since this may not be a representative group, the proportion with high cholesterol levels may be greater than in the general population. Trends in these measures can yield information on the performance of the health plans that are covered, but only when interpreted with caution. An increase in the proportion of people screened for cholesterol would indicate a positive performance, as long as the population base did not change because of the addition of people more likely to have been screened for reasons unconnected to the plan. An increase in the proportion of those tested with high levels would be a negative result, unless it was the result of screening a large number of new, high-risk plan members.
STANDARDIZATION
Standardization methods are used to account for demographic changes in a single population over time. For instance, if there were no changes in the age-specific cancer rates between 1987 and 2000, aging of the population alone would cause the overall death rate to increase from 195.9 to 217.1 per 100,000, given the Census Bureau's median population projection for the United States. It is important to understand this sort of pattern when vital statistics are used as performance measures: an increase to only 200.0 per 100,000 would actually be an advance (Stoto, 1992a).
Standardization also serves a second, very different purpose, because some of the differences that exist between communities reflect differences in population composition rather than differences in underlying rates. Communities differ in the age, race, and sex composition of their population, so communities with the same age-, race-, and sex-specific death rates will have different crude death rates, both overall and cause specific. In setting benchmarks for performance measures, communities should look at the national target set in Healthy People 2000 or some other source, as well as current rates of other communities. This comparison makes sense only if differences in the composition of the national and community populations are "removed." If all communities are adjusted to the same standard population, standardization provides a bridge from the national targets to state and local benchmarks.
For some purposes, however, standardization could lead to difficulties. Some CHIPs will want to consider setting priorities among health issues. Many factors go into such choices, but the current level of mortality associated with a disease or other health problem is a major one. Standardized rates present a different impression about the relative importance of various causes of death than unstandardized rates. For example, unintentional injuries have a somewhat higher mortality rate than cerebrovascular diseases when adjusted to the 1940 population (35.0 versus 29.7 per 100,000), but the crude cerebrovascular mortality rate is more than 50 percent higher than the crude unintentional injury mortality rate (61.2 versus 39.5 per 100,000).
The choice of a standard can make a substantial difference. Compare, for example, the overall cancer death rate standardized to the 1940 and the estimated 1990 populations. The greatest difference is in the level of the rates: the 1987 rate is fully 50
percent higher (199.9 compared to 132.9 per 100,000) when the 1990 population, rather than the 1940 population, is chosen as the standard. The choice of standard affects trends as well. With the 1990 standard, the cancer death rate increased by 6.2 percent between 1970 and 1987; with the 1940 standard, it increased by only 2.3 percent. Neither of these standards is ''correct" in any absolute sense, but it is important to note that they are different. Whatever decision is made about adjustment and choice of standard, it is important that the decision be applied consistently to all of the mortality objectives.
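The mechanics of direct standardization, and its sensitivity to the choice of standard, can be sketched as follows. The age-specific rates and the two standard populations are illustrative, not official figures:

```python
# Hypothetical age-specific death rates (per 100,000) for a community.
age_groups = ["<45", "45-64", "65+"]
rates = [20.0, 250.0, 1100.0]

# Age distributions loosely echoing a "younger" 1940-style standard and
# an "older" 1990-style standard (illustrative proportions only).
std_1940 = [0.78, 0.15, 0.07]
std_1990 = [0.68, 0.19, 0.13]

def direct_standardize(age_rates, standard):
    # Weighted average of age-specific rates, with weights taken from
    # the standard population's age distribution.
    return sum(r * w for r, w in zip(age_rates, standard))

print(f"Adjusted to 1940-style standard: {direct_standardize(rates, std_1940):.1f}")
print(f"Adjusted to 1990-style standard: {direct_standardize(rates, std_1990):.1f}")
```

The same age-specific rates yield a much higher adjusted rate under the older standard, mirroring the 199.9 versus 132.9 contrast discussed in the text.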
In some cases, the examination of age-specific rates should not be avoided. If rates are to be standardized, many statisticians favor using the 1940 U.S. population as a standard, primarily because it would be consistent with the long-term practice of the National Center for Health Statistics and others in reporting mortality rates (Curtin, 1992). Using this standard would facilitate the efforts of states trying to monitor their own progress on the objectives. Others argue against adjusting, especially to the 1940 population, because it masks the public health impact of the levels seen in crude death rates. One compromise would be to standardize the rates to a more recent population, such as the U.S. population in 1990. This would give a better picture of the current public health impact of various diseases (as measured by the relative numbers of deaths) and would provide the analytic benefits of age adjustment. The difficulty with using a new standard is that special calculations would be needed to adjust past data for trend analyses.
STATISTICAL MODELS FOR SMALL AREAS
For measures that are highly variable at the state or local level, numerator data for three, five, or more years can be aggregated into one or a running series of calculated rates. Such measures are slower to show the impact of interventions because they include data from past years, but they may be stable enough to show meaningful trends. When rates are changing over time, aggregated rates will not be comparable unless all of the rates are based on the same number of years. Thus, standards are needed to judge whether the variability of rates and measures is sufficiently small for tracking purposes and to ensure that the results are comparable within states and the nation.
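A sketch of such a fixed-window running rate (all counts hypothetical):

```python
# Illustrative annual deaths and population for a small area.
deaths = [3, 7, 2, 5, 6, 4, 8]
population = [12000] * 7

def running_rates(events, pop, window=3):
    """Rates per 100,000 with numerators and denominators pooled over a
    fixed window of years, so every point is based on the same span."""
    out = []
    for i in range(window - 1, len(events)):
        e = sum(events[i - window + 1 : i + 1])
        p = sum(pop[i - window + 1 : i + 1])
        out.append(round(e / p * 100_000, 1))
    return out

print(running_rates(deaths, population))
```

Pooling smooths the year-to-year noise in small counts; keeping the window length fixed is what makes successive points comparable, as the text notes.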
Kalton (1991) has proposed four statistical models for small area estimation that have potential for public health assessment.
"Synthetic estimation" uses information on the age, sex, and race distribution within a small area in combination with national race-, age-, and sex-specific rates of the outcome in question to estimate prevalence in the small area. Elston and colleagues (1991), for instance, have applied this approach to estimate the number of functionally dependent individuals for states and counties. Spasoff and colleagues (1996) have found, however, that synthetic estimates did not agree with estimates obtained through a community health survey in the same small area. "Regression estimation" uses information from a sample of small areas with complete data on a continuous outcome variable—the maternal mortality rate, for example—and other generally available predictor variables to estimate a regression equation and then uses these results to calculate predicted values of the maternal mortality rate in other communities for which the predictor variables are available. "Structure preserving estimation'' techniques use the methods of discrete data analysis, such as iterative proportional fitting, to combine survey-based information on the age and sex structure for an outcome such as disability with census information on the number of individuals in a community to estimate the prevalence of disability in a small community. "Composite estimation" combines information from the community in question (which might have a high degree of variability, depending on the size of the population) with a model-based estimate, such as those described above, according to an empirical Bayes model. Manton and colleagues (1989), for instance, describe the use of such a model to stabilize cancer mortality rates for counties in the United States. Malec and colleagues (1993) have developed a similar method for use with binary variables in the National Health Interview Survey.
As Kalton (1991) points out, all of these approaches depend on a statistical model, so the choice of a good model and effective auxiliary variables is important. Unless the auxiliary variables are strongly related to the outcome variable in question, the small area estimates will vary little from one area to another. In practice, the choice of the model and auxiliary variables is limited by the data available. Thus, although these approaches may be useful for health planners in predicting health care needs, they will be helpful for public health assessment purposes only if auxiliary variables are available to accurately reflect changes over time and local differences from national levels.
REFERENCES
Adams, P.F., and Marano, M.A. 1995. Current Estimates from the National Health Interview Survey, 1994. Vital and Health Statistics, Ser. 10, No. 193. PHS 96–1521. Hyattsville, Md.: National Center for Health Statistics.
Amler, R.W., and Dull, H.B., eds. 1987. Closing the Gap: The Burden of Unnecessary Illness. New York: Oxford University Press.
Brown, C.C., and Kessler, L.G. 1988. Projections of Lung Cancer Mortality in the United States: 1985–2025. Journal of the National Cancer Institute 80:43–51.
Curtin, L.R. 1992. A Short History of Standardization for Vital Events. In Reconsidering Age Adjustment Procedures: Workshop Proceedings. M. Feinleib and A.O. Zarate, eds. Hyattsville, Md.: U.S. Department of Health and Human Services, National Center for Health Statistics.
Elston, J.M., Koch, G.G., and Weissert, W.G. 1991. Regression-Adjusted Small Area Estimates of Functional Dependency in the Noninstitutionalized American Population Age 65 and Over. American Journal of Public Health 81:335–343.
Hahn, R.A., Teutsch, S.M., Rothenberg, R.B., and Marks, J.S. 1990. Excess Deaths from Nine Chronic Diseases in the United States, 1986. Journal of the American Medical Association 264:2654–2659.
Hoaglin, D.C., Light, R.L., McPeek, B., Mosteller, F., and Stoto, M.A. 1982. Data for Decisions: Information Strategies for Policymakers. Cambridge, Mass.: Abt Books.
IOM (Institute of Medicine). 1993. Access to Health Care in America. M. Millman, ed. Washington, D.C.: National Academy Press.
Kalton, G. 1991. Methods of Small Area Estimation: A Review. In Proceedings of Consensus Conference on Small Area Analysis. DHHS Pub. No. HRS-A-PE 91-1(A). Washington, D.C.: Health Resources and Services Administration.
Levin, D.L., Gail, M.H., Kessler, L.G., and Eddy, D.M. 1986. A Model for Projecting Cancer Incidence and Mortality in the Presence of Prevention, Screening, and Treatment Programs. In Cancer Control Objectives for the Nation, 1985–2000. NCI Monograph #2. Bethesda, Md.: National Cancer Institute.
Malec, D., Sedransk, J., and Tompkins, L. 1993. Bayesian Predictive Inference for Small Areas for Binary Variables in the National Health Interview Survey. In Case Studies in Bayesian Statistics. C. Gatsonis, J.S. Hodges, R.E. Kass, and N.D. Singpurwalla, eds. New York: Springer-Verlag.
Manton, K.G., Woodbury, M.A., Stallard, E., Riggan, W.B., Creason, J.P., and Pellom, A.C. 1989. Empirical Bayes Procedures for Stabilizing Maps of U.S. Cancer Mortality Rates. Journal of the American Statistical Association 84:637–650.
NCQA (National Committee for Quality Assurance). 1993. Health Plan Employer Data and Information Set and User's Manual, Version 2.0 (HEDIS 2.0). Washington, D.C.: NCQA.
Public Health Foundation. 1990. A Report on the States' Ability to Measure Progress Towards the Year 2000 Objectives. Washington, D.C.: Public Health Foundation.
Siegel, P.A., Frazier, E.L., Mariolis, P., Brackbill, R.M., Smith, C., and State Coordinators for the Behavioral Risk Factor Surveillance System. 1993. Behavioral Risk Factor Surveillance, Summary of Data for 1991: Monitoring Progress Toward the Nation's Year 2000 Health Objectives. Morbidity and Mortality Weekly Report 42(SS-4):1–21.
Sofaer, S. 1995. Performance Indicators: A Commentary from the Perspective of an Expanded View of Health. Washington, D.C.: Center for the Advancement of Health.
Spasoff, R.A., Strike, C.J., Nair, R.C., Dunkley, G.C., and Boulet, J.R. 1996. Small Group Estimation for Public Health. Canadian Journal of Public Health 87(2):130–134.
Stoto, M.A. 1989. Statistical Issues in Formulating the Health Objectives for the Year 2000. In Proceedings of the 1989 Public Health Conference on Records and Statistics. Washington, D.C.: National Center for Health Statistics.
Stoto, M.A. 1992a. Age Adjustment for the Year 2000 Health Objectives. In Reconsidering Age Adjustment Procedures: Workshop Proceedings. M. Feinleib and A.O. Zarate, eds. Hyattsville, Md.: U.S. Department of Health and Human Services, National Center for Health Statistics.
Stoto, M.A. 1992b. Public Health Assessment in the 1990s. Annual Review of Public Health 11:319–334.
USDHHS (U.S. Department of Health and Human Services). 1991. Healthy People 2000: National Health Promotion and Disease Prevention Objectives. DHHS Pub. No. (PHS) 91-50212. Washington, D.C.: Office of the Assistant Secretary for Health.
Weinstein, M.C., Coxson, P.G., Williams, L.W., Pass, T.M., Stason, W.B., and Goldman, L. 1987. Forecasting Coronary Heart Disease Incidence, Mortality, and Cost: The Coronary Heart Disease Policy Model. American Journal of Public Health 77:1417–1426.
Wilson, R.W., and Drury, T.F. 1984. Interpreting Trends in Illness and Disability: Health Statistics and Health Status. Annual Review of Public Health 5:83–106.
Woolsey, T.D. 1981. Toward an Index of Preventable Mortality. Vital and Health Statistics, Ser. 2, No. 85. Washington, D.C.: U.S. Government Printing Office.