Appendices



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.

Appendix A
Performance Measures: Source Materials

Association of State and Territorial Health Officials (ASTHO). Principles for Selecting Health Outcomes Objectives. Washington, DC. August 1995.
ASTHO suggests specific criteria to be used when choosing health outcomes objectives, including comparability, reliability, specificity, accessibility, and relevancy.

Barnow, B.S. The effects of performance standards on state and local programs. In Evaluating Welfare and Training Programs, C.F. Manski and I. Garfinkel, eds. Cambridge, MA: Harvard University Press. 1992.
The author discusses the issues that should be considered in developing performance standards for the Job Opportunities and Basic Skills (JOBS) program. He defines the concept of performance management as it relates to employment and training programs and discusses how a performance measurement system can help channel behavior in the direction desired by the federal government. The author reviews the Job Training Partnership Act (JTPA) and applies the lessons learned under that program to the JOBS program.

Barrett, T.J., B. Berger, and L. Bradley. Performance contracting: The Colorado model revisited. Administration and Policy in Mental Health, 20(2). 1992.
This article describes the performance contracting model that was developed in 1986–1987 by the Colorado Division of Mental Health (DMH) and the Colorado

Association of Community Mental Health Centers and Clinics (CACMHCC). Several performance contracting issues were left unresolved, including the identification of performance indicators that focus on the quality of services provided and on outcome measures rather than process measures.

Friedlander, D. Sub-Group Impacts and Performance Indicators for Selected Welfare Employment Programs. New York: Manpower Demonstration and Research Corporation. 1988.
The purpose of this study of five mandatory welfare employment programs was to determine the programs' effects on employment and welfare status and to explore the validity of certain performance measures. The study found that unadjusted outcome measures (in this case, simple job entry and case closure) are not valid indicators of performance: they tend to substantially overstate true effects, and that overstatement is not uniform across subgroups. Differences in program performance were determined more by the characteristics of enrollees, local AFDC requirements, and local employment conditions than by the program itself.

Hansen, J.S., ed. Preparing for the Workplace. Committee on Postsecondary Education and Training for the Workplace, Commission on Behavioral and Social Sciences and Education, National Research Council. Washington, DC: National Academy Press. 1994.
One of the programs examined by the Committee on Postsecondary Education and Training was the Job Training Partnership Act (JTPA), a program that had a well-developed set of performance measures and goals specified in its statute. The experience of the JTPA performance measures was mixed: although they increased program credibility, they also fostered "creaming" and encouraged short-term approaches. The committee found no evidence that programs with performance standards had greater effect than those without them.
The committee concluded that outcome standards are not always preferable to process measures, especially when process measures focus on "best practices."

Hatry, H., et al. Monitoring the Outcomes of Economic Development Programs: A Manual. Washington, DC: The Urban Institute Press. 1989.
The Urban Institute and the states of Maryland and Minnesota undertook the task of designing a performance monitoring system for selected major economic development programs. The system was designed to provide feedback on service outcomes and service quality. Other system criteria included frequent and timely performance information; a focus on outcomes accruing to clients; the need for nontraditional data sources; the inclusion of both intermediate and end

outcomes; an assessment of the impacts caused by the state program; a means to compare performance over time and against previous years; and a design that minimizes the costs of data collection and management. Among the limitations of the system is that performance information does not really explain why outcomes are the way they are; at best, it can suggest reasons.

Hatry, H. State and local government productivity and performance measurement. In State and Local Government Administration, J. Rabin and D. Dodd, eds. New York: Marcel Dekker. 1985.
The author discusses how performance monitoring is used, including for resource allocation, program planning, and contract monitoring. The author outlines several types of performance measures: effectiveness and quality measures, which assess the degree to which stated objectives are achieved and any negative consequences resulting from the service; and efficiency measures, including input/output measures, work standards, and productivity indices. A major issue in performance measurement is how to assess whether what is being measured is good or bad. The author offers seven benchmarks that might be of use in making that determination: (1) existing standards, (2) previous performance, (3) the performance of similar units, (4) outcomes for different client groups, (5) performance in other jurisdictions, (6) performance of the private sector, and (7) preset targets.

Hill, P.T., J. Harvey, and A. Praskac. Pandora's Box: Accountability and Performance Standards in Vocational Education. National Center for Research in Vocational Education, University of California, Berkeley. Santa Monica, CA: The RAND Corporation. 1993.
In developing statewide performance standards and measures for vocational education programs, the authors state that all performance measures should be developed around at least three outcomes: learning, student success in the labor market, and community-wide support for vocational programs.
The authors argue that any accountability system must emphasize meeting local needs because only local actors can judge whether a program is operating successfully. Several impediments to effective state-local cooperation are noted, including the lack of resources and personnel to support an effective performance measurement development effort.

Hoachlander, E.G. Systems of Performance Standards and Accountability for Vocational Education: Guidelines for Development. National Center for Research in Vocational Education, University of California, Berkeley. 1991.
This monograph defines performance measures and offers guidelines on what types of measures should be developed, what constitutes a good measure, the

types of statistical controls needed, and how best to proceed in developing a system of measures.

Illinois Department of Public Health. Illinois Project for Local Assessment of Needs. Springfield. December 1993.
This paper describes a new approach to the planning and delivery of public health services in Illinois. Practice standards and performance indicators are used to measure the core functions of public health. Local health departments are required to perform needs assessments every 5 years and to develop a community health plan that addresses three priority areas. Block grant funds are used to support the planning activities that will result in capacity and needs assessments. Training and technical assistance are provided by state health department staff and by a team from Southern Illinois University at Carbondale.

Kamis-Gould, E. The New Jersey Performance Management System: A state system and uses of simple measures. Evaluation and Program Planning, 10:249–255. 1987.
The New Jersey Performance Management System (PMS) was developed to ascertain whether mental health agency performance was congruent with the state mandate and whether intended results were produced. Four areas of performance were identified as critical: appropriateness, adequacy, efficiency, and effectiveness. These four dimensions were repeatedly subdivided, yielding operationally defined performance indicators available from routine reports. A task force was charged with developing the indicators, along with a dictionary of terms used in the PMS, statistical and accounting guidelines to ensure uniform derivation of the indicators, and statistical decisions that define high and low performance.

Larson, M.J., J.C. Buckley, and E.A. Elliott. Data Collection on Key Indicators for Policy, Alcohol, Illicit Drugs and Tobacco. Institute for Health Policy, Brandeis University. February 1995.
This paper presents detailed profiles of 34 data collection activities that can be used to monitor the nation's progress in reducing the adverse effects of alcohol, illicit drugs, and tobacco. Each profile contains information on the purpose of the collection, the sponsoring agency, the type of information gathered, and the survey sample design.

Lewin-VHI, Inc. Key Monitoring Indicators of the Nation's Health and Health Care and their Support by NCHS Data Systems. Prepared for the Office of

Analysis, Epidemiology and Health Promotion, National Center for Health Statistics, Centers for Disease Control and Prevention. Fairfax, VA. April 1995.
This report describes an evaluation of the adequacy and appropriateness of the information collected by NCHS to support key indicators for monitoring changes in the nation's health care system. A conceptual framework for classifying and evaluating indicators is outlined, and an ideal set of indicators is identified, as is a set that can be readily obtained. The report concludes with recommendations for next steps toward the implementation of a key indicator monitoring system.

Minnesota Planning. Minnesota Milestones: 1993 Progress Report. St. Paul, MN. May 1994.
In this report Minnesota rates its progress toward the measurable goals set forth in 1992. Data are compared between 1990 and 1992, and targets to be accomplished by 1995 are presented. Each goal is graded with a plus or minus sign to indicate the direction of progress. Goals are broad, cover a variety of areas (such as education, health, economic growth, and the environment), and represent Minnesotans' hopes for the future of their state. Attached to each goal is a set of measurable indicators. The report also describes advances made in the collection of results-oriented data.

National Association of County Health Officials. APEXPH: Assessment Protocol for Excellence in Public Health. Washington, DC. March 1991.
APEXPH is a voluntary process for community self-assessment, improvement planning, and internal evaluation by local health departments. Its purpose is to enhance organizational capacity and strengthen a department's leadership in the community so that it can better achieve goals that are relevant to that community. The workbook provides guidance on assessing and improving organizational capacity and on working with the local community to improve its health status.

National Center for Health Statistics. Healthy People 2000 Review, 1994.
Washington, DC: U.S. Department of Health and Human Services. 1995.
Healthy People 2000 is a nationwide prevention and health promotion initiative to track and improve the nation's health through the 1990s. It is a framework for reducing preventable death and disability, enhancing quality of life, and reducing disparities in health status among different population groups. Objectives are organized into 22 priority areas, each with its own set of objectives. This 1994 review gives a summary of the objectives and of the progress made in meeting them.

National Center for Health Statistics. Statistical Notes, Number 10. Washington, DC: U.S. Department of Health and Human Services. September 1995.
One major goal of Healthy People 2000 is to reduce health disparities among Americans, including disparities between racial and ethnic groups. This newsletter, based on the recommendations of Committee 22.1, presents updates of previously published trends in the Health Status Indicators for the total population and presents comparisons by race and Hispanic origin using current national data.

National Center for Health Statistics. Statistics and Surveillance, Number 6. Washington, DC: U.S. Department of Health and Human Services. January 1995.
This newsletter discusses the development and recommendations of Committee 22.1, a group of health professionals convened by the Centers for Disease Control and Prevention to identify a consensus set of indicators meeting the requirements of the Healthy People 2000 objective that calls for the development and implementation of a set of Health Status Indicators for federal, state, and local use.

North Carolina Office of State Planning. Performance measures in the performance/program budget. Office of State Planning Newsletter 2(1). March 1995.
This paper describes a key element of the North Carolina budget: the agency outcome measure, a results-oriented, numeric indicator of agency performance. Outcome measures are meaningful in the context of what an agency expects to accomplish and how it expects to reach its goals, and they are an integral part of an agency's strategic planning process and program management. Teams of staff members in each department developed a single measure for each outcome in the budget. Office of State Planning staff worked with the teams to ensure that the measures were statistically reliable and valid.

Oregon Commission on Children and Families. Communities Investing in the Future, 1994 Comprehensive Planning Guide. Portland, OR.
1994.
This document presents step-by-step guidelines designed to assist Oregon counties as they develop a mandated comprehensive plan for the well-being of all the children in the county. Steps include "community mapping" (needs assessment), selection of core benchmarks, identification of short- and long-term goals, and development of a macro budget to implement the plan.

Oregon Option. The Oregon Option: A Proposed Model for Results-Driven Intergovernmental Service Delivery. Portland, OR. July 1994.

This paper describes a proposed demonstration project, a partnership between Oregon and the federal government to redesign public service delivery around measurable outcomes. The state and federal governments will jointly identify results to be achieved, and the state will contract to achieve them. Both partners will agree to merge funding streams, renegotiate funding levels, eliminate costly restrictions, and provide multi-year funding. The demonstration project will focus initially on the Oregon benchmarks that address economic and social concerns.

Oregon Progress Board. Oregon Benchmarks: Standards for Measuring Statewide Progress and Institutional Performance. Report to the 1995 Legislature. Portland, OR. December 1994.
This report describes the Oregon benchmarks, statutorily mandated, measurable indicators that the state uses to chart its progress toward broad strategic goals: that its citizens be educated, functioning people, working in well-paid jobs and living in thriving communities. The benchmark system allows Oregon to pursue long-range goals while keeping tabs on immediate problems. This is achieved by focusing on measurable outcomes as indicators of achievement rather than on programs and expenditures.

Special Study Panel on Educational Indicators. Education Counts: An Indicator System to Monitor the Nation's Educational Health. Washington, DC: U.S. Department of Education. 1991.
The panel was asked to define specific indicators to assess the nation's educational performance. It developed indicators around six issue areas: learner outcomes, quality of educational institutions, readiness for school, education and economic productivity, equity, and societal support for learning.
Several obstacles were identified as barriers to indicator development: (1) a lack of consensus on a conceptual model of an optimally functioning system; (2) problems with validity and reliability, e.g., large gaps in data sources; (3) the need to ensure fair comparisons among schools and students; (4) the time and resource burdens imposed by implementing an indicator system; and (5) local pressures to produce desired statistical outcomes.

Stantitis, T. Review and Analysis of Alcohol and Drug Abuse Performance Agreements in California, Michigan, and Oregon. National Association of State Alcohol and Drug Abuse Directors (NASADAD). Washington, DC. 1996.
This project reviews the performance contracts used by California, Michigan, and Oregon to provide services to counties and community-based treatment programs. The reviewers found that each state handles its performance agreements

differently in the areas of contractor selection, data management, fiscal processes, and incentives or sanctions. The report suggested lessons that could be learned and recommended specific courses of action.

Substance Abuse and Mental Health Services Administration. Developing State Outcomes Monitoring Systems for Alcohol and Other Drug Abuse Treatment. Treatment Improvement Protocol Series 14, DHHS Publication No. (SMA) 95–3031. Washington, DC: U.S. Department of Health and Human Services. 1995.
Outcomes-based monitoring systems are broad-based efforts that link data from a variety of alcohol and drug programs. This treatment improvement protocol is designed to help state agencies develop, implement, and manage such systems to improve treatment outcomes and increase accountability for substance abuse treatment funding. In addition to outlining the methods and technical concerns involved in developing a monitoring system, the volume offers a discussion of political and ethical considerations.

Substance Abuse and Mental Health Services Administration. Outcomes Monitoring Planning Group Meeting: Report on the Second Working Meeting of the Center for Substance Abuse Treatment. Washington, DC. November 16, 1995.
This report of a meeting of state directors and their research and data experts continues the development of outcomes monitoring measures appropriate for state substance abuse agencies. The group reached consensus on a set of state-specific objectives in the areas of accessibility of services, process, outcomes, and societal impact. Recommendations were made on state and national data sources and on the steps needed to set up a feasibility study.

U.S. General Accounting Office. Program Performance Measures: Federal Agency Collection and Use of Performance Data. Washington, DC: U.S. General Accounting Office, GAO/GGD-92-65. 1992.
This survey of 103 federal agencies covered the current state of performance measurement: the extent to which measures have been developed and how they are used. Three-quarters of the agencies surveyed reported that they collected a variety of data to assess performance. The types of measures collected included inputs, workload, outputs, outcomes, and efficiency measures.

U.S. Department of Health and Human Services. Guiding Principles for Selecting Performance Partnership Objectives. Draft technical document excerpted from Examples of Prototype Performance Partnership Objectives. Washington, DC. May 1995.

This document offers ten guidelines for selecting partnership objectives and urges that objectives draw on Healthy People 2000 whenever appropriate. The guidelines are designed to make the objectives understandable and measurable.

U.S. Office of Management and Budget. Performance Partnerships: Guiding Principles. Washington, DC: U.S. Office of Management and Budget. 1994.
OMB suggests that performance measures be a mix of outcome and output measures and be developed jointly by those involved and the federal government. Measures need to specify the performance information, data sources, acceptable levels of precision and accuracy, domain of measurement, frequency of data collection, and period of time covered. Measures should be refined over time.

Washington State Department of Health. Public Health Improvement Plan: A Progress Report. Olympia, WA. March 1994.
This report describes the Washington Public Health Improvement Plan, a blueprint for improving the health status of the state through prevention and capacity development for public health services. It is based on specific objectives across a range of public health activities and lists outcome standards for each area. The report also details the capacity needed to meet these standards, as well as other interventions needed to improve the health status of Washington's citizens.
