Performance Measurement Considerations for Publicly Funded Health Programs
As discussed in the preceding chapter, performance measurement is a prominent feature of current policy and management approaches. This panel's efforts have been focused on performance measurement in the broad array of health-related programs supported in some measure by public funding. In its first report, the panel addressed primarily the federal-state funding relationship for the specific set of programs included in the performance partnership grant (PPG) proposal (see Chapter 1). In this report, the discussion has been expanded so that many of the concerns addressed are relevant for performance measurement more generally, not just in the context of a federal-state funding relationship. The previous chapter explored how performance measurement is being used in some of these other settings (e.g., for federal agencies reporting to Congress, for state agencies reporting to their legislatures, and for health plans in the private sector).
The discussion here has also been broadened to look beyond the specific program areas covered by the panel's first report. Each program area poses unique performance measurement challenges. These challenges reflect factors such as the nature of the services and program activities being undertaken, the extent to which evidence of effectiveness is available to guide program activities, and the degree of consensus on appropriate measures to be used. For example, the control of food-borne disease outbreaks generally requires much more rapid response than do steps to control cancer, and such differences should be reflected in the performance measures used.
Although program-specific issues must be considered, the panel emphasizes that a strictly programmatic perspective may discourage a more comprehensive approach to performance measurement that can capitalize on the complementary,
overlapping, and even synergistic interactions among programs and their information system needs. Thus, the panel has attempted to consider a mix of specific and cross-cutting issues. This chapter reviews several of these issues, including the broad array of health-related services and service relationships, measurement considerations for population-based health services, special considerations in specific health program areas, and the importance of using process guidelines as a basis for performance measurement.
Broad Array of Health-Related Services and Service Relationships
The panel's work has focused on performance measurement for health-related programs for which public funding is provided and for which performance-based accountability is sought to foster effective use of resources. Yet the programs for which this accountability is sought cannot be viewed only in terms of a particular funding arrangement. The stated goals and objectives of a health program, not the source of funding, should be the primary focus of performance evaluation. Both the breadth and the limitations of the services and service relationships involved are important because those elements must be accommodated in the performance measurement systems that are developed. Four key considerations are noted here.
First, most health program areas that receive public funding are influenced by a diverse array of factors. Funds may come from federal, state, local, and private sources and from different service categories within those sources. As the panel emphasized in its first report, this means that program outcomes must generally be viewed as the collective product of all these contributions and can rarely be credited to a single funding source. Thus, even though a single funder, such as a federal block grant program, may establish a requirement for performance measurement, the development of measures and related data resources requires consideration of the full scope of influences on the program being assessed.
Second, the focus on publicly funded program areas means that the panel has concentrated on performance measurement and data system issues of concern to public agencies at the federal, state, and local levels. Relevant work in the private sector, such as the health care performance measurement activities discussed in Chapter 2, should be taken into consideration, but the panel has not attempted to formulate recommendations regarding those activities.
Third, implementation of publicly funded health programs involves not only public-sector health agencies, but also agencies with other responsibilities (e.g., education, criminal justice, housing, transportation) and organizations and individuals in the private sector, such as hospitals, health plans, individual clinicians, and employers. This means that the planning and implementation of performance measurement for health programs should take a broad view of the stakeholders involved.
Finally, the health-related programs the panel has considered provide a variety of services, ranging from clinical care for illness to population-based services aimed at health promotion and disease prevention. Mental health programs, for example, typically emphasize clinical care to treat individual patients, whereas chronic disease programs are more likely to offer screening services, such as cholesterol testing for an entire community. A variety of programs may make use of public education aimed at the community at large. Environmental services, such as water treatment and restaurant inspections, are essential for protecting the health of all members of the community, but are not delivered directly to individuals. The mix of such services varies widely among programs. Using expenditures as a measure, the Public Health Foundation (Eilbert et al., 1996) found that mental health agencies devote almost all of their resources to personal health care services, whereas environmental agencies, which have major nonhealth responsibilities, support primarily population-based health services. Performance measurement efforts should have links to all these kinds of services and consider their differing data collection needs and data resources.
Measurement Considerations for Population-Based Health Services
In developing performance measurement systems for the broad range of publicly funded health programs, certain factors will be relevant to most, if not all, programs. For example, all programs should be monitored using a mix of capacity, process, and outcome measures, and those measures should be as valid, reliable, and responsive as possible to the changes they are expected to monitor. Further, these measurement systems must respond to changing health needs, measurement tools, and program resources. Within a state or community, several programs may benefit by coordinating both their services and their information systems.
Population Health Services
Many publicly funded health programs fall within the realm of "public health," a designation based not on the source of funding or on the specific content of the services but on the population-based approach used to plan and provide those services. The defining features of public health are its emphasis on protecting and improving the health of the general public through prevention-oriented population-based services, and its role in ensuring that key services reach individuals at risk. Examples include programs to provide clean water, immunizations for children, and adequate prenatal care to the disadvantaged. Because the health of the public, or of specific groups in the population, is influenced by a mix of factors, some of which are beyond the control of the individuals involved, public health programs often depend on collective action by various institutions
of society to achieve the full potential of health in a community. For purposes of performance measurement, public health activities with population-wide goals and objectives should be recognized as having goals distinct from those for personal health services, which focus on care for individuals.
Population health services are based on a public health perspective that focuses on an assessment of the overall health needs of the population. Some services (e.g., water treatment, public education programs, tobacco control) are provided to the population at large; others are delivered directly to individuals (e.g., immunizations, family planning, screening services) as a way to improve both individual health and the overall health status of the population. In contrast, personal health services are based on a clinical perspective that focuses on the care sought by an individual (e.g., diagnosis of disease, a surgical procedure, counseling services for participants in a substance abuse treatment program). Collectively, these personal health services for individuals contribute to better health for the population as a whole, but personal health services are not specifically intended to meet population health goals.
Population health services have important interrelationships with personal health services. Both may play a role in providing services for primary prevention or responding to certain health problems, and the benefits may be realized by specific individuals and the population in which they live. For example, the timely diagnosis and successful treatment of a case of infectious tuberculosis cures the individual and prevents the infection from spreading in the population. Moreover, diagnosis of an individual case of tuberculosis can trigger a systematic screening of population groups that may be at increased risk for infection. Other examples of this synergy can be found in the benefits for both individuals and society of successful treatment of those who abuse alcohol and other drugs, which can help reduce problems such as vehicle-related injuries caused by drunk driving, domestic violence, and crime related to illegal drugs. Such interconnections between personal and population health services are reflected in the recently renewed appreciation of the value of collaboration between the domains of medical care and public health (Lasker et al., 1997).
The distinctions between population and personal health services have implications for performance measurement and monitoring. For population health services, health outcomes and risk status are measured by overall changes for a population (or subgroup) as a whole. For personal health services, interventions must be monitored on the basis of the response of those individuals who received the services.
Monitoring Population Health Services
Many public health agencies at the local, state, and federal levels have an established foundation of ongoing collection of health-related data (e.g., vital records, infectious disease reporting, cancer registries, surveys on health status
and risk behaviors, hospital discharge reports) to inform programs, policy makers, and the public about the health status of the population and the effectiveness of health programs. In general, these data systems are oriented to producing information about the health of the population rather than to tracking the health of specific individuals. These activities, often referred to collectively as public health surveillance, are a key component of public health services. Public health surveillance has been defined as
the ongoing systematic collection, analysis, and interpretation of health data essential to the planning, implementation, and evaluation of public health practice, closely integrated with the timely dissemination of these data to those who have a need to know. The final link in the surveillance chain is the application of these data to prevention and control. A surveillance system includes a functional capacity for data collection, analysis, and dissemination linked to public health programs (Thacker and Berkelman, 1988:164).
Plans for performance measurement in publicly funded health programs should build on the surveillance systems already in place. They are a primary source of data for performance measures used to assess programmatic activities, especially measures of health outcomes and risk status. Other data collection activities may be needed to produce data for measures of program processes and capacity.
Those population health services that are generally regarded as highly successful in the prevention of adverse health outcomes may present special challenges for performance measurement. The protective effects of such services are often taken for granted, but their failure has the potential for widespread and serious consequences in the population. For example, a major outbreak of cryptosporidiosis in Milwaukee in 1993 occurred when essential water treatment systems broke down. For these services, outcome measures such as the incidence of disease are informative only when some aspect of the system fails, not when it is functioning properly. Therefore, performance measurement, in addition to monitoring health outcomes such as disease incidence, should focus on the steps taken to protect against such failures. These protective steps are generally best represented by capacity and process measures, such as water chlorination levels and numbers of inspections, that provide indications of appropriate risk reduction practices.
Some population health services exert an indirect influence on health or contribute to positive outcomes for future generations. They may act through the collective action of community groups (for examples, see Institute of Medicine, 1996a), and they may require sustained efforts to achieve the desired outcomes. The performance measures to be used to monitor such services must be selected carefully. As discussed earlier, the short time horizon usually adopted for performance monitoring—a period of 3 to 5 years was proposed for PPGs—may dictate the use of intermediate outcomes because the longer-term outcomes cannot be observed within the specified time frame. Process and capacity measures might be designed to assess the collaboration and continuity needed to achieve those longer-term outcomes.
An additional concern is ensuring that performance measurement promotes, or does not hinder, the "equity" of population health services. The use of measures that focus only on the total population can obscure problems among high-risk populations, which might be defined by geography, race or ethnicity, or risk-related characteristics. Program goals and the associated performance measures should be framed in a way that gives attention to all relevant populations, and data collection, particularly through surveys, must be designed to produce statistically meaningful performance measurement results for those population groups.
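One practical implication is that survey samples must be large enough to yield stable estimates for each high-risk subgroup, not just for the population overall. As a rough illustration only (a simplified sketch using the standard simple-random-sample formula for a proportion, with invented target values rather than figures from the panel's report), the sample size needed to estimate a subgroup rate within a given margin of error can be computed as follows:

```python
import math

def sample_size_for_proportion(margin_of_error: float,
                               expected_prevalence: float = 0.5,
                               z: float = 1.96) -> int:
    """Minimum simple-random-sample size to estimate a proportion
    within +/- margin_of_error at roughly 95% confidence (z = 1.96).
    Standard formula: n = z^2 * p * (1 - p) / e^2, rounded up."""
    n = (z ** 2) * expected_prevalence * (1 - expected_prevalence) \
        / margin_of_error ** 2
    return math.ceil(n)

# Precision needed for the subgroup estimate itself (worst case p = 0.5).
n_subgroup = sample_size_for_proportion(margin_of_error=0.05)

# If a high-risk subgroup is only 10% of the population and the survey
# samples proportionally, the total sample must be about 10x larger;
# oversampling the subgroup is the usual, cheaper alternative.
subgroup_share = 0.10
n_total_proportional = math.ceil(n_subgroup / subgroup_share)

print(n_subgroup)            # 385
print(n_total_proportional)  # 3850
```

The arithmetic makes the equity point concrete: a survey sized only for population-wide estimates will usually be far too small to say anything statistically meaningful about a subgroup that is a tenth of the population.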
Monitoring the Infrastructure for Publicly Funded Health Programs
The Future of Public Health (Institute of Medicine, 1988) describes the core functions of public health agencies at all levels of government as assessment of community (or population-wide) health status and health needs, policy development to protect and promote the health of the public, and assurance that services necessary to achieve health goals are provided. In recent years, public health practitioners have identified a set of 10 "essential services" that describe how the three core functions of public health are carried out (see Box 3-1).
These core functions and essential services might be considered part of the infrastructure that supports all publicly funded health programs. Performance measurement itself is readily encompassed as a responsibility of health agencies through both the assessment and assurance functions and at least two of the essential services: monitor health status to identify community health problems, and evaluate effectiveness, accessibility, and quality of personal and population-based health services.
The panel encourages states and communities to consider using performance measurement to monitor not only the programmatic aspects of public health services—immunization programs, water treatment, and maternal and child health programs, for instance—but also the infrastructure for these health programs as represented by activities related to the core functions and essential services.
Box 3-1 Essential Public Health Services

For example, monitoring the accuracy and completeness of surveillance services is important because insufficient or poorly conducted surveillance may miss health problems or provide a misleading picture of health status, especially in comparison with results based on good surveillance data. In the short term, a community or population that is served by inadequate surveillance may inaccurately be perceived as "healthier" than a community with a more comprehensive surveillance system that detects more cases of illness. This effect was seen when a state that quickly and accurately collected, analyzed, and interpreted data on salmonella infections detected a major outbreak, producing the impression that the population was less healthy than those of other states (Van Beneden et al., 1996). Further investigation showed, however, that similar outbreaks had gone undetected in other states with less proficient surveillance systems, making their populations appear healthier than they were.
Although the panel supports implementing performance measurement to monitor public health services, additional groundwork will be needed to reach agreement on an approach to measurement and to identify suitable measures and data sources. Furthermore, research and evaluation remain necessary to determine the impact on health outcomes of the performance of activities related to the core functions and essential services of public health.
Until a clearer consensus emerges regarding such measures, states and communities may wish to refer to some of the tools that have been developed to help
local health departments assess their ability to perform the core public health functions and deliver associated services (e.g., National Association of County Health Officials, 1991; National Civic League, 1993; Centers for Disease Control and Prevention, 1995). Other work to develop formal measures of effective local health department performance may also be informative (Miller et al., 1994a,b; Turnock et al., 1994a,b, 1995; Richards et al., 1995). Proposals for objectives for public health infrastructure for Healthy People 2010 (Office of Disease Prevention and Health Promotion, 1997) may likewise suggest performance measures for this purpose. Additional guidance for such performance measures should be expected from efforts to establish a national accreditation program for local health departments and to develop national public health performance standards (Halverson et al., 1998). This work is being done collaboratively by the Centers for Disease Control and Prevention (CDC), the Association of State and Territorial Health Officials, the National Association of County and City Health Officials, the National Association of Local Boards of Health, the Public Health Foundation, and the American Public Health Association.
Some Performance Measurement Considerations Related to Program-Specific Matters
Many health program areas have distinctive features that will need to be taken into account as performance measurement systems are developed. Some of these features for selected program areas are reviewed in this section.
Environmental Health Programs
Environmental health services, such as ensuring clean air, safe water, and protection from other toxic exposures, are a classic component of public health programs, but they pose difficult challenges for performance measurement. Environmental threats to health and the services designed to control those threats are diverse. For example, environmental health services must address the low-level air and water contamination that can exacerbate conditions such as asthma and increase the longer-term risks for cancers, chronic respiratory illness, and other adverse health outcomes. They must also address the sudden high-level toxic exposures that produce acute health effects requiring immediate medical attention. The primary goal is to prevent both types of exposure and to ensure that if either should occur, an effective means of responding will be available. Environmental health outcomes of specific exposures reflect interactions among personal susceptibility, other hazards in the environment (synergistic effects), the biologically effective doses of the hazards, and the mitigating effects of protective measures that may operate by affecting any of these elements (National Research Council, 1994).
Efforts to monitor environmental health risks and steps taken to control them
therefore require a mix of information on the hazards (e.g., specific air or water pollutants), the exposures (e.g., biological markers, such as blood lead levels), and health outcomes (e.g., asthma, birth defects, and cancer) (see, e.g., Thacker et al., 1996). Such monitoring activities can be challenging because of the variety of potential hazards, the range of settings in which exposures can occur (e.g., home, workplace, community at large), the differing sources of hazardous exposure (e.g., multiple small sources, such as vehicle emissions or pesticide runoff in surface water, versus point sources, such as smokestack emissions or wastewater discharged from a manufacturing plant), and the interaction between private-sector business interests and public-sector regulation. Moreover, if health effects occur long after an exposure or when an exposure is not perceived, health care providers treating individual patients may not be able to identify the link between the current health problem and the past exposure. Growing interest in environmental equity and environmental justice has also drawn attention to the need to address the disproportionate exposure to environmental hazards (e.g., hazardous waste sites, hazardous manufacturing processes) faced by certain communities, neighborhoods, and other population groups.
The public health community is working to develop better ways of addressing these environmental health challenges. For example, the National Association of County and City Health Officials (1997), with support from the National Center for Environmental Health at CDC, is working to develop methods for use by communities in assessing environmental health conditions. Environmental health data issues have also been the focus of various workshops (e.g., National Center for Environmental Health, 1996; Public Health Foundation, 1997).
This work has highlighted several problem areas that are of particular concern for performance measurement efforts. Scientific evidence on the links between environmental exposures and health outcomes is limited. Even where a risk exists and these links are understood, available data may not be adequate to assess the exposures of individuals or population groups. Data are collected by a variety of federal, state, and local agencies and in the private sector by individual companies, but lack of coordination in these data collection activities can lead to redundancies and to inconsistencies across data systems. It can also be difficult to identify and gain access to the data, and proprietary data may never come to public attention. Most critical for performance measurement is the lack of consensus on appropriate indicators of environmental health status or of capacity and processes in environmental health services.
The Environmental Protection Agency (EPA) is using performance measurement in its performance partnerships with states and has undertaken several initiatives aimed at strengthening environmental data more generally (see Environmental Protection Agency, 1998a,b). EPA notes that these collaborative efforts with states and industry include a reassessment and reorganization of reporting requirements to reduce unnecessary reporting and achieve better coordination, the development of data standards, and the modernization of information systems. Because of the diversity of interests served by environmental data, the panel emphasizes the need for federal, state, and local health agencies to ensure that they are represented in these EPA discussions so that health data requirements and concerns receive appropriate attention.
Mental Health Programs
Until the mid-1960s, most publicly funded mental health services were provided in institutional settings. Frequently, individuals with serious and persistent mental illness (e.g., schizophrenia and bipolar disorder) were treated in state institutions for many years—sometimes for their lifetimes. The Community Mental Health Centers Act of 1963 promoted the development of community-based programs as an alternative to institutional care. New treatment strategies and the relatively recent development of more sophisticated medications have also made it possible for increasing numbers of people who were previously hospitalized to be cared for in community-based settings.
With a greater proportion of publicly funded mental health services being delivered in nonhospital settings and the recent trend toward managed care contracting, the nature of the service delivery system is changing. Increasingly, these publicly funded services are being provided by either for-profit or not-for-profit contractors. These changes have increased the importance of using performance contracts and outcome measures to respond to demands for higher levels of accountability, and of having surveillance systems in place to monitor the impact of those changes. There is also an increased demand that such measures address the outcomes of interest to consumers and their families.
Until recently, there has been little consensus on how performance should be assessed. States, provider organizations, and accrediting bodies have undertaken separate efforts to develop their own standards and measures. Among the groups developing performance evaluation systems are the American College of Mental Health Administrators, the American Managed Behavioral Healthcare Association, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the National Committee for Quality Assurance (NCQA), the National Alliance for the Mentally Ill (NAMI), and the National Association of State Mental Health Program Directors (NASMHPD). At the federal level, the Center for Mental Health Services of the Substance Abuse and Mental Health Services Administration (SAMHSA) and the Health Care Financing Administration (HCFA) are also actively involved. The health outcome and risk status measures proposed in the panel's first report are listed in Appendix A.
All of these efforts have helped advance performance evaluation, but they have not resulted in the standard measurement system that was envisioned for the PPG proposal. Currently, almost every state mental health agency uses some set of measures to evaluate the impact of mental health services. Most states have
developed their performance evaluation packages by means of a local consensus-building process that has included consumers, advocates, and providers. The result is typically a system customized for the state but bearing little similarity to the performance evaluation systems used in other states.
Increasingly, the emphasis is on the development of a common outcome-oriented framework for the evaluation of mental health programs (e.g., MHSIP Task Force on a Consumer-Oriented Mental Health Report Card, 1996; Institute of Medicine, 1997b; Smith et al., 1997; National Association of State Mental Health Program Directors, 1998). The Mental Health Statistics Improvement Program (MHSIP) Consumer-Oriented Mental Health Report Card (MHSIP Task Force on a Consumer-Oriented Mental Health Report Card, 1996) was one of the initial efforts to develop a measurement framework that incorporates outcomes of mental health services and specifically considers consumer concerns. An evolving consensus is reflected in the December 1997 adoption by NASMHPD members of a standardized performance indicator framework for the evaluation of public mental health services (National Association of State Mental Health Program Directors, 1998). The National Association of State Mental Health Program Directors Research Institute (1998b) is including indicators from the NASMHPD framework in the performance measurement system it is developing for use by state psychiatric hospitals to fulfill their accreditation requirements under the JCAHO (1998) Oryx program (see Chapter 2 for discussion of the Oryx program).
The NASMHPD framework draws from other efforts, such as the MHSIP Report Card and the work of the American College of Mental Health Administrators, NAMI, and NCQA. The framework was also influenced by a collaborative review of measures used in managed care settings conducted by SAMHSA; HCFA; and state mental health, substance abuse, and Medicaid programs. The five performance domains of the NASMHPD framework (and examples of the associated indicators) are access (enrollment rate, utilization rate), quality/appropriateness (consumer participation in treatment planning, contact within 7 days following hospital discharge), outcome (school improvement, symptom relief), structure/plan management (consumer/family member involvement in policy development), and early intervention/prevention (substance abuse screening, use of self-help/self-management). Formal specifications for performance measures to operationalize these indicators are being prepared. The domains of the MHSIP Report Card are similar, but exclude structure/plan management.
The panel believes that further advances in performance-based accountability for public mental health systems will depend on four factors. First, consensus must be reached among stakeholders (e.g., consumers; clinicians; health plans;
program managers; and federal, state, and local mental health agencies) on the domains that should be addressed by the performance measurement process. The adoption of the NASMHPD framework by the state mental health directors is an important advance.
Second, agreement must be reached on the concerns to be addressed in each domain and on the specific measures to be used to represent those concerns. The specification of outcome measures, including ones that reflect the consumer's perspective, is considered a particularly high priority. Examples of the indicators proposed in the NASMHPD (1998) framework to reflect outcome-related concerns are symptom relief, employment or school status, consumer perception of outcomes, and living situation.
Work is also needed on process and capacity measures for assessing access to care and the appropriateness of care. Such measures have been widely used, but have generally not been selected on the basis of evidence that they are linked to desired outcomes—a key requirement that the panel emphasized in its first report. This disconnect between outcomes and other measures points to the panel's third concern: research and program evaluation studies are needed to build a stronger evidence base linking mental health outcomes to specific aspects of process and capacity.
The fourth area that will require attention if performance measures are to be used successfully is the further development of agreed-upon data collection tools and procedures and their integration into existing mental health program information systems. A recent study to test the ability of five states to use many of the measures identified in the MHSIP Report Card and the NASMHPD framework—the Five-State Feasibility Study—found that fewer than half of the 28 measures tested could be reported by all five states and that differing definitions frequently limited the comparability of apparently similar measures (National Association of State Mental Health Program Directors Research Institute, 1998a). A subsequent study will build on this work with a group of 10 states. The performance measurement requirement for states' JCAHO-accredited hospitals is also likely to help stimulate further development of these measurement tools and information systems.
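The comparability problem created by differing definitions can be made concrete with a small hypothetical sketch. Two states applying a nominally identical "contact within 7 days of hospital discharge" indicator to the same client records can report different rates if, for instance, one counts same-day contact and the other does not (all dates and decision rules below are invented for illustration; they do not come from the Five-State Feasibility Study):

```python
from datetime import date

# Hypothetical (discharge, first-contact) dates for five clients.
records = [
    (date(1998, 3, 2), date(1998, 3, 6)),   # 4 days later
    (date(1998, 3, 2), date(1998, 3, 10)),  # 8 days later
    (date(1998, 3, 5), date(1998, 3, 12)),  # 7 days later
    (date(1998, 3, 9), date(1998, 3, 9)),   # same day
    (date(1998, 3, 9), date(1998, 3, 20)),  # 11 days later
]

def followup_rate(records, window_days, count_same_day=True):
    """Share of clients contacted within `window_days` of discharge."""
    def contacted_in_window(discharge, contact):
        delta = (contact - discharge).days
        if delta == 0:
            return count_same_day  # states differ on this rule
        return 0 < delta <= window_days
    hits = sum(contacted_in_window(d, c) for d, c in records)
    return hits / len(records)

# State A counts same-day contact; State B excludes it.
rate_a = followup_rate(records, window_days=7, count_same_day=True)
rate_b = followup_rate(records, window_days=7, count_same_day=False)
print(rate_a)  # 0.6
print(rate_b)  # 0.4
```

Identical data, an apparently identical measure, and a one-clause difference in the definition produce a 20-percentage-point gap, which is why agreed-upon measure specifications, not just agreed-upon measure names, are a precondition for cross-state comparability.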
In contrast to the emphasis on population-based preventive services in many public health programs, mental health programs are focused almost entirely on services for persons with mental disorders. In some areas, however, a population perspective is useful. Assessments of the overall prevalence and incidence of mental disorders in the general population could help mental health programs gauge the potential need for individual or community-based services. Evaluation of preventive interventions and interventions outside the treatment setting (e.g., reducing the number of people with serious mental illness in jails and prisons) will also require the use of population-based measures. Suitable measures and data collection instruments will have to be developed and tested. The Epidemiologic Catchment Area Study (Bourdon et al., 1992) and the National Comorbidity Survey (Kessler et al., 1994) have provided data for national prevalence estimates, but these studies were not designed to produce data on an ongoing basis or for states and communities, which performance measurement will require. CDC is working with several researchers to explore whether questions on the annual, state-based Behavioral Risk Factor Surveys can be used to obtain valid assessments of the mental health status of the population (Centers for Disease Control and Prevention, 1998).
In summary, the growing demand for performance-based evaluations of publicly funded mental health programs is creating an urgent need to develop greater consensus on the overall framework for these evaluations. This framework should define the domains for assessment (e.g., outcomes, appropriateness of and access to services), indicators that identify the critical concerns in each domain, and an array of specific measures and measurement tools (e.g., specific instruments for client assessment). As in other program areas, users should have the flexibility to select instruments and measures that best match program goals and strategies.
Substance Abuse Programs
Prevention and treatment of substance abuse are high priorities at the national, state, and local levels. Substance abuse includes use of illegal drugs, as well as inappropriate use of legal products such as alcohol and prescription medications.3 Substance abuse demands attention from a health perspective not only because it produces serious and difficult-to-treat physical and psychological effects, but also because it substantially increases the risk of other health problems, such as injury, adverse pregnancy outcomes, tuberculosis, HIV infection, and sexually transmitted diseases. In contrast to many other health problems, substance abuse is also an important criminal justice issue because use of many abused substances is illegal and because substance abuse tends to generate other criminal activity that adversely affects the general population.
Substance abuse programs have a clear stake both in population-based activities such as health education aimed at prevention and in personal health services needed for treatment of substance abuse and the other health problems it generates. Treatment may be supplemented by wraparound services that help people function more effectively in the community (e.g., transportation, housing, job placement).
The implication for performance measurement is that substance abuse programs might be expected to address a variety of outcomes, ranging from the impact of treatment on the social functioning of treated individuals to the prevalence of substance abuse in specific population groups (e.g., adolescents) that are often the focus of prevention efforts. As noted in the panel's first report (National Research Council, 1997), however, few potential substance abuse performance indicators are measured in exactly the same way by all states or other jurisdictions. The health outcome and risk status measures proposed for substance abuse programs in the panel's first report are listed in Appendix A. There is general agreement on the content areas of greatest interest (see Box 3-2), but there is substantial variation in the program strategies adopted and the characteristics of the populations served in specific settings (e.g., public versus private, managed care versus fee-for-service).

3 The panel's work on substance abuse in the first phase of the study and the discussion in this section of the current report focus on program activities related to drug and alcohol abuse. In some contexts, smoking and other forms of tobacco use may be viewed as substance abuse issues. The panel's first report addressed tobacco use in the context of chronic disease prevention. See Appendix A for the tobacco-related risk status measures proposed in that report.
Several activities are under way that, over time, are expected to contribute to greater consistency in the measures used. In October 1997, a workgroup organized by the National Association of State Alcohol and Drug Abuse Directors (NASADAD) proposed the adoption of a performance measurement framework based on the domains of efficiency, effectiveness, and structure (see National Association of State Alcohol and Drug Abuse Directors, 1998). Indicator areas and possible measures or data sources have been proposed for each domain. The proposed indicators for effectiveness are physical and mental health status, economic self-sufficiency, social supports and functioning, and alcohol and other drug use. The proposed indicators for efficiency are access, treatment retention, costs of services, and appropriateness of services. For structure, the proposed indicators are service capacity, data capabilities, workforce competence, and client characteristics. NASADAD is coordinating a discussion among its full membership to refine this proposal and achieve consensus on the various components. As the process moves forward, detailed specifications will be developed for individual measures. NASADAD is working closely with SAMHSA in this activity and is consulting with NASMHPD in areas of common interest.

Box 3-2 Performance Measurement Domains for Substance Abuse Identified in Phase I PPG Process

SOURCE: National Research Council (1997).
SAMHSA's Center for Substance Abuse Treatment (CSAT) is involved in various activities related to performance measurement. For example, the Treatment Outcomes and Performance Pilot Studies project has funded 14 states to test methods of monitoring the performance of publicly funded substance abuse treatment services. Another priority is the development of measures that can be used in managed care settings. Currently, the Health Plan Employer Data and Information Set (HEDIS) (National Committee for Quality Assurance, 1997) is one of the primary assessment tools for managed care services, but it includes few measures related to substance abuse treatment services. In March 1998, CSAT and other SAMHSA units with responsibilities in managed care began discussions with a small group of providers, researchers, federal and state policy makers, and representatives from public- and private-sector managed care organizations aimed at identifying additional measures that might be used. Plans call for further discussions with a broader group of participants to refine the proposed measures and promote consensus. Other efforts include the development by the Foundation for Accountability (1998) of measures to assess health plan services to detect and treat alcohol misuse.
CSAT is also working on improving the availability of data to support performance measurement. Three states are testing the feasibility of integrating data related to substance abuse treatment from separate data systems operated by the state Medicaid, mental health, and substance abuse agencies. The project aims to produce a flexible model for this process that other states can apply to their specific organizational and data system configurations. In other work, CSAT is helping states develop data sets and information systems for monitoring treatment outcomes (e.g., Harrison, 1995). Much of this work, however, is in the context of evaluation studies that are essential to establish an evidence base for effective treatment services, but are not necessarily designed to produce data on a routine basis for performance monitoring.
The Center for Substance Abuse Prevention (CSAP) within SAMHSA has been working with several states to test a minimum data set on prevention services. The states reached consensus on five indicators that could be used for performance measurement: youth use (age at first or early use, current use), youth attitudes toward use, parental attitudes toward youth use, actual or perceived availability of specific substances, and ability to comply with Synar Amendment provisions on controlling the sale of tobacco to youth (the Synar Amendment is discussed in Chapter 2). NASADAD's discussions on performance indicators noted the need for prevention indicators, but NASADAD is deferring work in this area until greater progress has been made on the treatment indicators.
A broader national performance measurement activity related to substance abuse is being led by the Office of National Drug Control Policy (ONDCP) (1998). Two of the five ONDCP goals—to prevent drug use among America's youth and to reduce the health and social costs of drug use—specifically address health-related concerns. For each goal, several specific objectives have been established, and performance targets and associated measures have been chosen for both the goals and the objectives. For example, the goal of reducing the health and social costs of illegal drug use has six objectives: improve the drug treatment system, reduce drug-related health problems, promote a drug-free workplace, support training of the workforce for substance abuse services, develop medications and guidelines for substance abuse treatment, and support research and analysis to reduce the health and social costs of substance abuse. Among the performance measures adopted for this goal are the prevalence of drug abuse and the number of chronic drug users. Examples of the measures for the objective on improving the drug treatment system are the rate of full-time employment among adults completing substance abuse treatment programs and the average waiting time to enter treatment. Although ONDCP is focusing on national results, its activities may be sufficiently influential that states and communities will adopt measures that match those being used by ONDCP, thus achieving a greater consistency in measurement practices.
Process Guidelines as a Basis for Performance Measurement
As noted earlier, the panel concluded in its first report that performance monitoring requires the use of outcome measures and related process and capacity measures. The panel recommended that each process and capacity measure be accompanied by reference to published "guidelines or other professional standards that describe the relationship between the process or capacity measure and the desired health outcome" (National Research Council, 1997:2). The panel recognized, however, that such guidelines are not always available. In such cases, the panel recommended specifying the assumed relationship between proposed process or capacity measures and a health outcome, and documenting the assumed relationship with empirical evidence and professional judgment. Where guidelines are lacking, additional research is needed to establish more precisely the relationship between program interventions and outcomes. The panel recommended that DHHS sponsor empirical outcome studies so that a more definitive list of recommended process and outcome measures can be developed.
Similar recommendations regarding the use of evidence-based performance measures emerged from the Institute of Medicine (1997a) report Improving Health in the Community. This report addresses the use of performance monitoring in
community health improvement activities. It advises giving priority to health improvement actions that can be linked to evidence of effectiveness, but cautions that such evidence is limited for many health issues. It may be appropriate for communities to address those issues, but they must consider carefully which actions will make the best use of their resources.
Thus both reports point out the need for evidence concerning processes that lead to better health outcomes. This evidence is needed to guide performance as well as to design better performance measures.
Guidelines for Personal Health Services
Generally speaking, the evidence linking processes and outcomes is more extensive and more fully documented for personal health services than for population-based services. For instance, the Guide to Clinical Preventive Services (U.S. Preventive Services Task Force, 1996), first published in 1988, provides a rigorous assessment of evidence concerning the effectiveness of personal health services for disease prevention, such as screening, immunization, chemoprophylaxis, and health counseling, that are provided to individuals in clinical care settings. Between 1989 and 1996, the Agency for Health Care Policy and Research (AHCPR) sponsored the development of a series of 17 clinical practice guidelines on topics such as the use of mammograms for breast cancer screening, diagnosis and treatment of depression in primary care, and smoking cessation (Agency for Health Care Policy and Research, 1998). Clinical practice guidelines for many areas of clinical care have also been developed by a variety of groups, such as medical specialty organizations, insurers, and health care organizations. Reports from the Institute of Medicine (1990, 1992) provide a framework for promoting the development and use of high-quality guidelines.
AHCPR was also directed to develop clinical performance measures related to clinical practice guidelines. An Institute of Medicine (1990) report elucidated the distinction between guidelines and performance measures and the connections between the two. AHCPR (1995) subsequently published a working group report that describes how to construct performance measures related to individual guideline recommendations by identifying a population of individuals to whom the recommendation applies and then collecting data that show whether these individuals received the recommended care. Performance measures of this type, and specifically those related to the U.S. Preventive Services Task Force guidelines for clinical preventive services, appear in sets of performance measures for managed care organizations such as HEDIS (National Committee for Quality Assurance, 1997).
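The AHCPR (1995) construction described above (identify the population to whom a guideline recommendation applies, then determine whether those individuals received the recommended care) reduces to a numerator-over-denominator rate. A minimal sketch, assuming hypothetical patient records and a mammography screening recommendation for women aged 50 to 69; the field names and eligibility criteria here are illustrative only, not an actual measure specification:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    sex: str
    received_mammogram: bool

def mammography_rate(patients, min_age=50, max_age=69):
    """Share of age-eligible women who received a screening mammogram.

    Denominator: population to whom the recommendation applies.
    Numerator: those who received the recommended care.
    """
    eligible = [p for p in patients
                if p.sex == "F" and min_age <= p.age <= max_age]
    if not eligible:
        return None  # measure is undefined when no one is eligible
    return sum(p.received_mammogram for p in eligible) / len(eligible)

patients = [
    Patient(55, "F", True),
    Patient(62, "F", False),
    Patient(45, "F", True),   # below the eligible age range: excluded
    Patient(58, "M", False),  # not in the target population: excluded
]
print(mammography_rate(patients))  # 0.5
```

Performance measures of this numerator/denominator form are exactly what appear in measure sets such as HEDIS, where the eligibility and care criteria are specified in detail.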
AHCPR's activities have shifted from the development of practice guidelines to support for Evidence-Based Practice Centers, which develop reports on the scientific basis for interventions to prevent, diagnose, treat, and manage common diseases and clinical conditions (Agency for Health Care Policy and Research, 1998). These reports are intended to assist both public and private organizations in developing and implementing their own guidelines and performance measures. Among the topics chosen for the first round of evidence-based reports, some relate to areas considered by the panel, such as pharmacotherapy for alcohol dependence and treatment of depression with new drugs.
Guidelines for Population-Based Health Services
Practice guidelines and evidence-based reports, such as those developed by AHCPR, the U.S. Preventive Services Task Force, and other groups, often cover such topics as immunization and screening services that lie in the overlap between clinical care and public health. The development of guidelines and evidence-based reports for the population-based services that are at the heart of public health programs is just beginning. One of the major challenges in this process will be the limited availability of evidence regarding the effectiveness of community-based interventions. Studies of these interventions are difficult to design and conduct (see below for further discussion of this point).
Perhaps the most significant effort in this area is being undertaken by the Task Force on Community Preventive Services, which is working on a Guide to Community Preventive Services (U.S. Public Health Service, 1998). This guide (expected to be published in 2000) will complement the Guide to Clinical Preventive Services by focusing on community-based prevention and disease control strategies. It will provide evidence-based recommendations for interventions and their implementation. Separate sections of the guide will cover changing risk behaviors (e.g., tobacco use, sexual behavior, physical activity); reducing specific diseases, injuries, and impairments (e.g., vaccine-preventable diseases, violent behavior); and addressing environmental and ecosystem challenges. A section will also be devoted to cross-cutting public health activities such as surveillance.
Research Needs for Practice Guidelines and Performance Measurement
For practice guidelines and performance measurement in both clinical care and public health services, evidence is needed not only on whether an intervention works—whether it is causally associated with desired outcomes—but also on how it works. Moreover, the use of performance measurement must be studied to assess its effect on health outcomes and program operations.
Evidence on how a successful intervention works can be used to guide the organization, operation, and improvement of the associated services, and the selection and use of meaningful process and capacity performance measures. For example, evidence shows that prevention and treatment can be effective in reducing substance abuse, but further studies are needed to clarify which elements of these interventions contribute in what degree to successful outcomes (see Institute of Medicine, 1996b; McLellan et al., 1996, 1997; Landry, 1997). For population-based interventions, studies must also distinguish between the factors that contribute to success at the individual level and those that lead to successful outcomes for the population as a whole. For example, evaluations of several community-based programs designed to reduce coronary heart disease have shown that the programs were often successful in reducing disease risk for many individuals, but generally were not able to reach a sufficiently large proportion of the population to alter community-level health outcomes (e.g., Elder et al., 1993; Fortmann et al., 1995; Murray, 1995; Luepker et al., 1996). In addition, the unanticipated strength of other influences that were acting to reduce risks for heart disease largely overwhelmed the community-level impact of the interventions being tested.
Gaps in the available evidence can help indicate areas in which further research is needed. Such research will require a variety of approaches, including qualitative analysis (e.g., ethnographic studies) and quantitative analysis using techniques such as randomized controlled trials, quasi-experimental outcomes research, epidemiological studies, and program evaluation studies. The community-based interventions that are part of many publicly funded health programs have proven particularly challenging to study (see, e.g., Koepsell et al., 1992; Connell et al., 1995). For example, because communities are constantly affected by many factors other than the intervention of interest, it is difficult to identify an appropriate comparison (i.e., a control group or counterfactual) for judging what would have happened without the intervention. Of necessity, comparisons are often based on the community itself before the intervention or on one or two other communities considered similar in key characteristics. However, unanticipated changes in social or economic conditions or unidentified differences among communities can limit the usefulness of these comparison cases. The small number of communities included in most studies further complicates the analysis by making it difficult to detect a statistically significant community-level effect from the intervention or to conclude that the intervention has had no effect.
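The small-numbers problem described above can be made concrete with the standard design-effect calculation for cluster-level designs, in which outcomes correlated within a community (the intracluster correlation, or ICC) deflate the effective sample size. A hedged sketch with purely illustrative numbers, not drawn from the studies cited:

```python
def design_effect(cluster_size, icc):
    """Standard design effect for equal-sized clusters: 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n_clusters, cluster_size, icc):
    """Total individuals surveyed, deflated by the design effect."""
    total = n_clusters * cluster_size
    return total / design_effect(cluster_size, icc)

# Two communities of 5,000 respondents each, with a modest ICC of 0.01:
# the 10,000 interviews carry the information of about 196 independent
# observations, because the community, not the individual, is the
# effective unit of analysis.
print(round(effective_sample_size(2, 5000, 0.01)))  # 196
```

This is why adding respondents within a community yields rapidly diminishing returns, while adding whole communities is costly; most community trials therefore end up with too few units to detect plausible intervention effects.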
A major research effort will be necessary to establish a firm scientific basis for practice guidelines for individual and community-based interventions and for appropriate uses of performance measurement to monitor the implementation of those guidelines. The research in even a single area, such as smoking cessation, must cover practices as diverse as television advertising to inform the public about the nature of the risk and about aids to smoking cessation, enforcement of regulations against selling tobacco products to underage individuals, and bans on tobacco use in public places. The scale of research will vary from testing small programs in a few schools and workplaces to comparisons involving whole cities or states. In order to be effective, these research studies must have access to data on other factors in the community environment (e.g., changes in health care guidelines, changes in the local economy, natural disasters) that may affect the outcome of program efforts.
Studies must also be done to assess the effects of using performance measurement and to ensure that the commitment of resources to this activity is appropriate. It should not be assumed that performance measurement will have the desired positive effect on health outcomes and program management. Ideally, evaluations would demonstrate that performance monitoring activities contribute to protecting the health of the population by promoting such practices as the identification of and intervention against important health problems, coordination of information resources across health-related agencies, and the use of program practices consistent with evidence-based guidelines. Also valuable, however, would be learning that certain performance monitoring activities were inappropriately focused on health outcomes of minor significance to a population or subpopulation, leaving major health problems undetected or unaddressed.
As the fruits of this research effort become available, it will become possible to design more effective health programs, as well as to design better systems for monitoring performance within these programs.
Publicly funded health programs cover a broad spectrum of programmatic areas and include a diverse mix of population-based and personal health services. A performance monitoring system provides a framework both for defining desired health outcomes in each program area and for focusing attention on the steps being taken by health programs to achieve those outcomes. The panel cautions that given the complexity of influences on health outcomes, those outcomes can rarely be credited to a single program or funding source. Although the panel has focused on performance measurement for public-sector accountability, performance measurement can help ensure accountability to either public or private investors in and purchasers of these services.
Consideration of program-specific issues is important in defining performance measures and data needs. Consensus must be established regarding the appropriate domains of measurement and the measures to be used. Nevertheless, a strictly programmatic perspective could discourage a more comprehensive approach to performance measurement that can capitalize on the interrelationships among programs and the overlapping aspects of their data needs. For example, performance measurement systems might be developed for elements of the public health infrastructure such as surveillance systems. These infrastructure components contribute to essential public health services that can support efforts across a range of categorical program areas. Performance measurement in certain program areas (e.g., environmental health, mental health, substance abuse) requires involvement with and an understanding of programs outside of the publicly funded health arena, such as air quality management and criminal justice systems.
The panel emphasizes that performance measurement should rest on a strong evidence base if links are to be established between desired health outcomes and both program activities (represented by process and capacity measures) and intermediate outcomes (represented by changes in risk status). Looking only at data on health outcomes may provide little insight into the contributions to good results being made by health program activities or into changes in those activities that might be necessary to improve health outcomes. Performance measurement draws attention to and establishes accountability for processes and intermediate outcomes that are more clearly under the control of health programs. Performance measurement may also promote the needed development of and adherence to evidence-based best practices to guide steps to achieve desired health outcomes, including a much-needed emphasis on defining standards of practice in public health program areas. In many program areas, the evidence base for performance measurement must be strengthened through additional research, including research to evaluate the effectiveness of performance monitoring itself.
References

Agency for Health Care Policy and Research 1995. Using Clinical Practice Guidelines to Evaluate Quality of Care. Volume 2: Methods. AHCPR Pub. No. 95-0046. Rockville, Md.: U.S. Department of Health and Human Services, Public Health Service.
1998. Clinical Information. U.S. Department of Health and Human Services. http://www.ahcpr.gov/clinic (May 4, 1998).
Bourdon, K.H., D.S. Rae, B.Z. Locke, W.E. Narrow, and D.A. Regier 1992. Estimating the prevalence of mental disorders in U.S. adults from the Epidemiologic Catchment Area Survey. Public Health Reports 107(6):663–668.
Centers for Disease Control and Prevention 1995. Planned Approach to Community Health: Guide for the Local Coordinator. Atlanta, Ga.: National Center for Chronic Disease Prevention and Health Promotion.
1998. Self-reported frequent mental distress among adults—United States, 1993–1996. MMWR 47:325–331.
Connell, J.P., A.C. Kubisch, L.B. Schorr, and C.H. Weiss, eds. 1995. New Approaches to Evaluating Community Initiatives: Concepts, Methods, and Contexts. Washington, D.C.: Aspen Institute.
Eilbert, K.W., M. Barry, R. Bialek, and M. Garufi 1996. Measuring Expenditures for Essential Public Health Services. Washington, D.C.: Public Health Foundation.
Elder, J.P., T.L. Schmid, P. Dower, and S. Hedlund 1993. Community heart health programs: Components, rationale, and strategies. Journal of Public Health Policy 14:463–479.
Environmental Protection Agency 1998a. Administrator's Announcement on Reinventing Environmental Information. February 4, 1998. http://www.epa.gov/reinvent/onestop/cbmemo6.htm (April 28, 1998).
1998b. One Stop Program Strategy and Grant Award Criteria. February 10, 1998. http://www.epa.gov/reinvent/onestop/strategy.htm (April 28, 1998).
Fortmann, S.P., J.A. Flora, M.A. Winkleby, C. Schooler, C.B. Taylor, and J.W. Farquhar 1995. Community intervention trials: Reflections on the Stanford Five-City Project experience. American Journal of Epidemiology 142:576–586.
Foundation for Accountability 1998. Measuring Quality. http://www.facct.org/measures.html (December 28, 1998).
Halverson, P.K., R.M. Nicola, and E.L. Baker 1998. Performance measurement and accreditation of public health organizations: A call to action. Journal of Public Health Management and Practice 4(4):5–7.
Harrison, P. 1995. Developing State Outcome Monitoring Systems for Alcohol and Other Drug Abuse Treatment. DHHS Pub No. (SMA) 95-3031. Rockville, Md.: Substance Abuse and Mental Health Services Administration, Center for Substance Abuse Treatment.
Institute of Medicine 1988. The Future of Public Health. Committee for the Study of the Future of Public Health. Washington, D.C.: National Academy Press.
1990. Clinical Practice Guidelines: Directions for a New Program. M.J. Field and K.N. Lohr, eds. Committee to Advise the Public Health Service on Clinical Practice Guidelines. Washington, D.C.: National Academy Press.
1992. Guidelines for Clinical Practice: From Development to Use. M.J. Field and K.N. Lohr, eds. Committee on Clinical Practice Guidelines. Washington, D.C.: National Academy Press.
1996a. Healthy Communities: New Partnerships for the Future of Public Health. M.A. Stoto, C. Abel, and A. Dievler, eds. Committee on Public Health. Washington, D.C.: National Academy Press.
1996b. Pathways of Addiction: Opportunities in Drug Abuse Research. Committee on Opportunities in Drug Abuse Research. Washington, D.C.: National Academy Press.
1997a. Improving Health in the Community: A Role for Performance Monitoring. J.S. Durch, L.A. Bailey, and M.A. Stoto, eds. Committee on Using Performance Monitoring to Improve Community Health. Washington, D.C.: National Academy Press.
1997b. Managing Managed Care: Quality Improvement in Behavioral Health. M. Edmunds, R. Frank, M. Hogan, D. McCarty, R. Robinson-Beale, and C. Weisner, eds. Committee on Quality Assurance and Accreditation Guidelines for Managed Behavioral Health Care. Washington, D.C.: National Academy Press.
Joint Commission on Accreditation of Healthcare Organizations (JCAHO) 1998. Oryx Fact Sheet for Health Care Organizations. http://www.jcaho.org/perfmeas/oryx/sidebar1.htm (July 24, 1998).
Kessler, R.C., K.A. McGonagle, S. Zhao, C.B. Nelson, M. Hughes, S. Eshleman, H.U. Wittchen, and K.S. Kendler 1994. Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States: Results from the National Comorbidity Survey. Archives of General Psychiatry 51(1):8–19.
Koepsell, T.D., E.H. Wagner, A.C. Cheadle, D.L. Patrick, D.C. Martin, P.H. Diehr, and E.B. Perrin 1992. Selected methodological issues in evaluating community-based health promotion and disease prevention programs. Annual Review of Public Health 13:31–57.
Landry, M.J. 1997. Overview of Addiction Treatment Effectiveness. Pub. No. DHHS(SMA) 97-3133. Rockville, Md.: U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration.
Lasker, R.D., and the Committee on Medicine and Public Health 1997. Medicine and Public Health: The Power of Collaboration. New York: The New York Academy of Medicine.
Luepker, R.V., L. Rastam, P.J. Hannan, D.M. Murray, C. Gray, W.L. Baker, R. Crow, D.R. Jacobs, P.L. Pirie, S.R. Mascioli, M.B. Mittlemark, and H. Blackburn 1996. Community education for cardiovascular disease prevention: Morbidity and mortality results from the Minnesota Heart Health Program. American Journal of Epidemiology 144:351–362.
McLellan, A.T., G.E. Woody, D. Metzger, J. McKay, J. Durell, A.I. Alterman, and C.P. O'Brien 1996. Evaluating the effectiveness of addiction treatments: Reasonable expectations, appropriate comparisons. Milbank Quarterly 74(1):51–85.
McLellan, A.T., M. Belding, J.R. McKay, D. Zanis, and A.I. Alterman 1997. Can the outcomes research literature inform the search for quality indicators in substance abuse treatment? Pp. 271–311 in M. Edmunds, R. Frank, M. Hogan, D. McCarty, R. Robinson-Beale, and C. Weisner, eds, Managing Managed Care: Quality Improvement in Behavioral Health. Institute of Medicine. Washington, D.C.: National Academy Press.
MHSIP Task Force on a Consumer-Oriented Mental Health Report Card 1996. The MHSIP Consumer-Oriented Mental Health Report Card. Final report of the Mental Health Statistics Improvement Program (MHSIP) Task Force on a Consumer-Oriented Mental Health Report Card. Rockville, Md.: U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration, Center for Mental Health Services.
Miller, C.A., K.S. Moore, T.B. Richards, and C. McKaig 1994a. A screening survey to assess local public health performance. Public Health Reports 109:659–664.
Miller, C.A., K.S. Moore, T.B. Richards, and J.D. Monk 1994b. A proposed method for assessing the performance of local public health functions and practices. American Journal of Public Health 84:1743–1749.
Murray, D.M. 1995. Design and analysis of community trials: Lessons from the Minnesota Heart Health Program. American Journal of Epidemiology 142:569–575.
National Association of County Health Officials 1991. APEXPH: Assessment Protocol for Excellence in Public Health. Washington, D.C.: National Association of County Health Officials.
National Association of County and City Health Officials 1997. Community Environmental Health Assessment: Project Fact Sheet. Washington, D.C.
National Association of State Alcohol and Substance Abuse Directors 1998. Performance Measures. http://www.nasadad.org/permeas.htm (June 15, 1998).
National Association of State Mental Health Program Directors (NASMHPD) 1998. State Mental Health Directors Adopt a Framework of Performance Indicators for Mental Health Systems. Press announcement. May 1998. Alexandria, Va.
National Association of State Mental Health Program Directors Research Institute 1998a. Five State Feasibility Study on State Mental Health Agency Performance Measures. Final Report. Alexandria, Va.: National Association of State Mental Health Program Directors Research Institute.
National Association of State Mental Health Program Directors Research Institute 1998b. NRI Performance Measurement System Home Page. http://dmhmrs.chr.state.ky.us/nripms/ (August 14, 1998).
National Center for Environmental Health 1996. Environmental Public Health Surveillance Workshops. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. http://www.cdc.gov/nceh/programs/ephs/wkshop/summary.htm (March 12, 1998).
National Civic League 1993. The Healthy Communities Handbook. Denver: National Civic League.
National Committee for Quality Assurance 1997. HEDIS 3.0/1998. Washington, D.C.: National Committee for Quality Assurance.
National Research Council 1994. Science and Judgment in Risk Assessment. Committee on Risk Assessment of Hazardous Air Pollutants, Board on Environmental Studies and Toxicology. Washington, D.C.: National Academy Press.
National Research Council 1997. Assessment of Performance Measures for Public Health, Substance Abuse, and Mental Health. E.B. Perrin and J.J. Koshel, eds. Panel on Performance Measures and Data for Public Health Performance Partnership Grants, Committee on National Statistics. Washington, D.C.: National Academy Press.
Office of Disease Prevention and Health Promotion 1997. Developing Objectives for Healthy People 2010. Washington, D.C.: U.S. Department of Health and Human Services.
Office of National Drug Control Policy 1998. Performance Measures of Effectiveness: A System for Assessing the Performance of the National Drug Control Strategy. Washington, D.C.: Office of National Drug Control Policy.
Public Health Foundation 1997. Environmental Health Data Needs: An Action Plan for Federal Public Health Agencies. Washington, D.C.: Public Health Foundation.
Public Health Functions Steering Committee 1994. Public Health in America. U.S. Department of Health and Human Services, Office of Disease Prevention and Health Promotion. http://web.health.gov/phfunctions/public.htm (August 31, 1998).
Richards, T.B., J.J. Rogers, G.M. Christenson, C.A. Miller, M.S. Taylor, and A.D. Cooper 1995. Evaluating local public health performance at a community level on a statewide basis. Journal of Public Health Management and Practice 1(4):70–83.
Smith, G.R., R.W. Manderscheid, L.M. Flynn, and D.M. Steinwachs 1997. Principles for assessment of patient outcomes in mental health care. Psychiatric Services 48:1033–1036.
Thacker, S.B., and R.L. Berkelman 1988. Public health surveillance in the United States. Epidemiologic Reviews 10:164–190.
Thacker, S.B., D.F. Stroup, R.G. Parrish, and H.A. Anderson 1996. Surveillance in environmental public health: Issues, systems, and sources. American Journal of Public Health 86:633–638.
Turnock, B.J., A. Handler, W.W. Dyal, G. Christenson, E.H. Vaughn, L. Rowitz, J.W. Munson, T. Balderson, and T.B. Richards 1994a. Implementing and assessing organizational practices in local health departments. Public Health Reports 109:478–484.
Turnock, B.J., A. Handler, W. Hall, S. Potsic, R. Nalluri, and E.H. Vaughn 1994b. Local health department effectiveness in addressing the core functions of public health. Public Health Reports 109:653–658.
Turnock, B.J., A. Handler, W. Hall, D.P. Lenihan, and E. Vaughn 1995. Capacity building influences on Illinois local health departments. Journal of Public Health Management and Practice 1(3):50–58.
U.S. Preventive Services Task Force 1996. Guide to Clinical Preventive Services: Report of the U.S. Preventive Services Task Force, 2nd ed. Baltimore, Md.: Williams & Wilkins.
U.S. Public Health Service 1998. Task Force on Community Preventive Services. The Guide to Community Preventive Services. http://web.health.gov/communityguide (January 1998).
Van Beneden, C.A., W.E. Keene, D.H. Werker, et al. 1996. A Health Food Fights Back: An International Outbreak of Salmonella Newport Infections Due to Alfalfa Sprouts. Abstract K46, 36th ICAAC (Interscience Conference on Antimicrobial Agents and Chemotherapy). New Orleans, La. September 18, 1996.