4

Implementing Performance-Based Agreements

In considering the measurement of public health performance, it is important to distinguish clearly among four possible sources of effects: (1) specific federally financed programs; (2) other state and local public health programs; (3) programs operated by nonhealth agencies that can affect health outcomes; and (4) personal, social, economic, and other factors that are not related to any program intervention. When more than one of these factors may affect health outcomes, assessing their relative effects is critical to understanding the role of particular public health interventions.

Several strategies are available to try to separate the effects of programmatic and nonprogrammatic variables. For example, to improve the comparability of outcome measures across states or over time, one can use statistical methods to adjust a state's measures for inherent differences in the composition of the state's population, economy, public health infrastructure, and so on, and for temporary changes in aggregate conditions (e.g., phases of the business cycle). Although such approaches have been used to structure federal-state performance-based agreements in job training programs (Heckman and Hotz, 1989), little empirical research of this type has been done in the areas of public health, substance abuse, or mental health. The current level of empirical knowledge about the relationship between public health interventions and outcomes is not sufficiently well developed to allow one to judge the effectiveness of a state's efforts to realize a given health outcome objective independent of all other factors. Making appropriate statistical adjustments for sociodemographic and other relevant factors is hampered both by poorly understood relationships between individual factors and health outcomes and by the limited availability of timely and appropriate
data to make such adjustments in those cases in which the relationship between particular variables and outcomes has been empirically established. Another problem with comparing outcomes across states is that comparable data are often not available. Similarly, accurately comparing the progress made by different states in realizing their process and capacity objectives can be extremely difficult if states choose different process and capacity measures or set different levels of accomplishment (i.e., performance objectives). Using cross-state comparisons of "performance" as the analytic basis for determining financial rewards or penalties for participating agencies may therefore be very problematic.

Consequently, the panel concludes that performance monitoring must make use of process and capacity measures to complement available measures of outcomes. Whenever process and capacity measures are used in performance agreements, the panel recommends that the relationships between those measures and desired health outcomes be explicitly tied to professional standards, published clinical guidelines, or other references in the professional literature. Of course, the process and capacity measures selected by a state for its performance agreements should possess the same statistical attributes as outcome measures: namely, they should be valid, reliable, and responsive. Although this "multimeasure" approach will not provide public officials or consumers with conclusive evidence of the effectiveness of particular interventions, it will allow interested parties to examine the actions taken by agencies to realize their objectives and to suggest whether changes in the magnitude or direction of their efforts should be considered.

Certain public health outcomes of interest to the public, program administrators, and elected officials cannot be measured in the short term because of inadequate empirical knowledge, incomplete data, or insufficient time to observe change.
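The covariate-adjustment strategy described above can be illustrated with a small sketch. This is a hypothetical example, not a method from the report: the state data, the variable names, and the choice of a single covariate are all assumptions, and a real adjustment would use richer models and empirically validated covariates.

```python
# Hypothetical sketch: adjusting a state outcome measure for one
# nonprogrammatic covariate so that cross-state comparisons are fairer.
# All numbers are made up for illustration.

def ols_fit(x, y):
    """Closed-form simple linear regression: y is modeled as a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def adjusted_rates(covariate, outcome):
    """Adjusted rate = observed - predicted + overall mean.

    The part of each state's outcome explained by the covariate is
    removed; what remains is closer to the state's own performance.
    """
    a, b = ols_fit(covariate, outcome)
    mean = sum(outcome) / len(outcome)
    return [y - (a + b * x) + mean for x, y in zip(covariate, outcome)]

# Illustrative data: raw immunization rates vs. a poverty covariate.
poverty = [8.0, 12.0, 16.0, 20.0]     # percent of population in poverty
immunized = [92.0, 88.0, 85.0, 80.0]  # raw immunization rate, percent

adj = adjusted_rates(poverty, immunized)
```

After adjustment, the spread across states shrinks toward the portion not explained by the covariate, while the overall mean is preserved; states are then compared on their residual performance rather than on raw rates that partly reflect population composition.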
Yet such short-term considerations should not inhibit states and localities from implementing optimal long-run strategies for addressing public health concerns. For example, a long-term perspective is needed to measure changes in behavior such as smoking, for which an evaluation of the outcome could require a 20-year horizon. Moreover, short-term monitoring of performance associated with specific federally funded programs does not provide an appropriate basis for assessing the full set of responsibilities of state and local health, mental health, and substance abuse agencies. Clearly, the individual diseases and health conditions that the panel studied for this report are only a subset of those of concern to public health agencies around the country. Over the long term, the panel believes it would be preferable to monitor the progress made by public health agencies in a more generic, less disease-specific way. Until then, monitoring performance associated with federal funding of a particular program will be complicated considerably by the fact that funding support for programs in health agencies often comes from multiple sources. The federal mental health block grant, for example, represents
only about 4 percent of state mental health agency budgets, with state general revenues, private insurance, Medicaid, and local sources making up the balance. Any general outcome indicator of mental health status is highly unlikely to provide a valid estimate of how the mental health block grant, independent of other factors, affects the overall mental well-being of a state's population.

In this regard, the illustrative capacity measures in each of the health areas do not, by themselves, reflect the full set of capacities needed by a state agency responsible for public health. Even if a state fully satisfied all of the capacity measures listed in this report, it would still need what might be called a "general readiness" capacity. For example, if a public health agency is suddenly challenged with a wholly unpredictable public health threat, such as occurred when the AIDS epidemic arose or when cryptosporidiosis broke out in the Midwest, it must have the ability to respond. Under the current performance measurement system, there is no way to document such a general readiness capacity.

Process and capacity measures have certain advantages over outcome measures: the data collection for them may be less expensive, they provide useful historical information, and they more appropriately address issues of program efficiency. The panel concluded that it is not possible to formulate a list of all the process and capacity measures that would cover every strategy a state agency might adopt to meet an important health objective in the specific areas addressed in this report. Rather, the panel decided to list examples of commonly accepted strategies that are reasonable to use.
For example, there are many effective strategies for reducing the spread of vaccine-preventable diseases; as long as a state adopts some reasonable strategy for increasing the immunization rate among at-risk persons, that state should be permitted to monitor the performance of its chosen strategy. If a state agency wants to use methods not on the panel's list of commonly accepted strategies, the agency should be required to explain the connection and the strength of the relationship between the process or capacity and the desired outcome.

Accountability, as defined in a recent RAND report (Hill et al., 1995:9), is a "process to help people who expect specific benefits from a particular activity (and whose support for the effort is essential to its continuation) to judge the degree to which the activity is working in their interests so that they might sustain it, modify it, or eliminate it." This view is particularly appropriate for performance-based agreements between the states and the federal government. As discussed in this report, the technical limitations inherent in statistical measures of performance for the near term preclude using such measures as part of a hierarchical process in which one level of government holds another under tight supervision. A more appropriate and productive approach, given the current state of data availability, is a federal-state partnership that allows some flexibility for each state to
negotiate the specific performance measures that will most accurately reflect its particular programs and data.

A potentially important use of performance indicators is to identify possible health problems that need attention in a particular state, geographic region, or subpopulation. With an agreed-upon combination of outcome, process, and capacity measures, it becomes possible to examine the need for technical assistance to those states that appear to have a problem in realizing specific objectives, whether because of inadequate resources, shifting demographics, or management problems. Using performance measures to signal the need for technical assistance is consistent with the National Performance Review initiative at the federal level and with the total quality management activities being undertaken by public and private organizations around the country.

The data infrastructure required to support performance measures needs to be strengthened. This conclusion does not mean that the areas for improvement in data standards are unique to these programs. Indeed, public health data are often superior to those available for clinical decisions about the treatment of individuals, and superior still to the data available for many business and social service decisions. However, as indicated throughout this report, many of the potential health outcome measures are heavily dependent on a small number of state-federal surveys, including the Behavioral Risk Factor Surveillance System (BRFSS) and the Youth Risk Behavior Surveillance System (YRBSS). Unfortunately, these surveys do not cover all states, and the methods used to collect these data vary greatly across states.
Because existing resources are inadequate to support a consistent and comprehensive approach, the federal government would have to provide major increases in technical assistance and financial support for infrastructure to states if the state-federal data systems are to be able to provide the quantity and quality of data necessary to implement performance agreements. In developing data resources that will support such agreements for public health, substance abuse, and mental health, the panel recommends that DHHS work toward the goals listed below. For each goal, the panel identifies one or more steps that can be taken by the department toward that goal.

Goal 1

Work with states to identify and develop common definitions and methods that will contribute toward standardizing measurements of health outcomes, processes, and capacity in public health, substance abuse, and mental health.

Common definitions and measures are important in order to promote a common language for states, the federal government, and others to use when assessing progress toward societal goals. The panel is encouraged that the Substance Abuse and Mental Health Services Administration is planning a major effort for state data infrastructure development.
Suggested Steps

Pursue strategies to improve the collection and integration of public health, substance abuse, and mental health services data, particularly from capitated managed care providers.

Support research to examine the relationship between interventions (process) and specific health outcomes.

Goal 2

Encourage consolidation of data resources in ways that can efficiently support multiple programs (e.g., public health, substance abuse, and mental health) and a broad range of purposes (e.g., performance monitoring, evaluation, and program operations).

Suggested Step

Encourage states and federal agencies to consolidate data resources as a means of increasing the efficiency of existing information systems and surveys.

Goal 3

Identify and respond to states' priorities for data related to public health, substance abuse, and mental health policy and practice.

Suggested Steps

Convene state program directors on a regular basis to identify data needs and discuss progress in developing appropriate information systems and data surveys.

Incorporate consideration of state data needs in the development and improvement of federal data resources.

Goal 4

Identify and promote the data collection and analytic capabilities of states with regard to public health, substance abuse, and mental health.

Suggested Steps

Identify efforts at the state level that may serve as models for other state (and national) data resource development, such as confidentiality agreements that allow different state health agencies to share client information while still protecting confidentiality, so that these systems can be used for statistical purposes.

Establish a grant program that will help create model state data systems.
Provide additional resources to states to promote analytic and data-gathering capabilities, such as helping states develop high-quality surveys and integrate them with administrative data so that national statistics can be built on them, and developing BRFSS and YRBSS enhancements and analysis training.
Most important, the panel recommends that using performance measures to accurately assess the effectiveness of public health, substance abuse, and mental health programs be viewed as an ongoing, long-term public administration activity, with a strong federal commitment to providing technical assistance and infrastructure support to its partners at the state and local levels. Although much useful information can be gathered over the next several years on health outcomes, processes, and capacities, the full use of performance measures to improve programs must await the development of more and better empirical information on the effects of interventions on outcomes, as well as more complete, uniform, and timely data on those outcomes. Longer-term research and the development of the information systems needed to support more adequate performance measures in these areas will be the subject of the panel's second report.