Putting Performance Measurement in Context
The panel views performance measurement as a management and oversight tool intended to improve desired outcomes by focusing attention on quantifiable measures of those outcomes, on changes (or lack of change) in those measures, and on the processes and capacity being applied to achieve the outcomes. The principal aim of this report is to address technical and policy issues associated with the data and information systems needed to provide effective support for performance measurement for publicly funded health-related programs. Consideration of these issues must, however, take into account the broader policy context in which performance measurement is used. It is within this context that performance goals are defined and then translated into performance measures, for which information systems must be able to produce data of the needed scope and quality. This chapter reviews the characteristics and uses of performance-based management and accountability systems, some of their strengths and limitations, and examples of their application in federal and state government settings and in the private sector for health care organizations. It also notes ways in which such mechanisms rely on data that are already available and some of the potential limitations of those data for accurately assessing performance.
Use of Performance Measurement in Accountability Systems
As noted in Chapter 1, the movement to increase the accountability of organizations and programs for achieving desired outcomes, particularly in the public sector, has led to renewed interest in performance measurement. This approach to the management of public programs and policies is believed to be superior to other management approaches that are based on micromanagement, process controls, and oversight of resources and activities, and that place little emphasis on results (Osborne and Gaebler, 1992; Wholey and Hatry, 1992).
Performance-Based Accountability Systems
As defined by the National Performance Review (1993), the guiding principle of governmental performance-based accountability systems is the provision of increased flexibility to lower-level units of government, or "partners," in exchange for increased accountability for results. This increased flexibility may take the form of consolidation of funding streams, elimination of micromanagement, devolution of decision making, or a reduction in bureaucratic paperwork and reporting. Increased accountability for results means that partners focus on outcomes, rather than inputs and processes, as the basic measures of success. Some accountability systems may use such measures to allocate resources or apply incentives to reward desirable outcomes.
Performance-based accountability systems are being established in the public sector between the legislative and executive branches of governments and between levels of government. The Performance Partnership Grants (PPGs) that were proposed by the U.S. Department of Health and Human Services (DHHS) for several of its state block grant programs are an example of a system intended to operate between levels of government. Such arrangements can be established between federal and state, federal and local, or state and local units of government. Even in the absence of formal PPG legislation, performance partnership agreements can be expected to function in this manner.
Public-sector agencies are extending performance-based accountability into their relationships with the private sector through mechanisms such as performance-based contracting for the delivery of services. For example, state substance abuse or mental health agencies often contract with private providers to deliver publicly funded services. With performance-based contracts, those providers can be held responsible for certain overall outcomes among the people they serve. Performance-based accountability can even be extended to interrelationships in a broad community context. The community health improvement process described by the Institute of Medicine (1997) relies on performance measurement to monitor progress toward health improvement goals and ensure accountability of specific segments of the community for the processes and outcomes for which they have accepted responsibility.
These management and accountability arrangements between and within units and levels of government can be viewed as a substitute for the private sector's market mechanism (see Wholey and Hatry, 1992). In the private sector, it is assumed that in the long run, the discipline of the marketplace will motivate firms to strive for cost-efficiency and maximization of returns to stockholders.
Measures such as profits, rates of return on investments, and market share can be used to assess a unit's success at maximizing efficiency. Furthermore, market forces and signals provide the sorts of feedback managers need to achieve such objectives.
In contrast, the public sector is not governed by the economic forces of competition and profits. Residents who find their state services inadequate or overpriced generally cannot choose (unless they move) to use the services of another state the way a consumer can choose to buy a competing product. To judge the performance of the public sector, stakeholders must rely on other, noneconomic indicators related to human, social, and natural "capital" that must be preserved and invested wisely. There is less consensus on what these indicators should be than on the economic indicators of business performance.
The performance partnership mechanism is one of a much broader class of performance-based systems that have been considered and used in public-sector management over the past 30 to 40 years.1 These systems include performance-based accountability, performance-based budgeting, performance monitoring, and benchmarking systems. While differing in various ways, all are predicated on a common view that government agencies and organizations need to be more accountable to legislatures, and ultimately to the public, for the resources they receive, and that this accountability should be based on improvements in the dimensions of well-being that such agencies seek to affect.
Problems encountered in earlier efforts to apply performance-based systems offer lessons for current performance-based approaches (Florida Office of Program Policy Analysis and Government Accountability, 1997; U.S. General Accounting Office, 1997c). The extensive information needs of those earlier efforts were not adequately supported by the available record-keeping systems, staff expertise, and computer and information resources. Thus substantial staff time was necessary to meet reporting requirements. Despite this investment of staff time and other resources in producing the required reports, these efforts had little observable impact on funding decisions. The performance-based management approaches used in the past often lacked key leadership support in the executive and legislative branches of government. Furthermore, the analytic character of these approaches made them insensitive to the political aspects of decision making. If performance measurement is to succeed, it must avoid problems such as these.
Operation of Performance-Based Systems
While various performance-based systems differ in their particulars, there appear to be several key steps in the operation of such systems. These steps are briefly reviewed here.2 In Box 2-1, these steps are illustrated by a hypothetical state immunization program and performance measures suggested in the panel's first report.
Step 1: Develop an explicit set of goals and objectives and articulate strategies for achieving them. The first step for a performance-based system is to delineate clearly the goals and objectives of an agency or program. These goals and objectives are often captured in a strategic plan that includes a mission statement and a discussion of how the goals and objectives will be achieved. Furthermore, a strategic plan may outline the resources that will be used to meet these goals and objectives; it may explicitly stipulate the necessary expenditures as well. As noted earlier, one would expect the goals and objectives to focus on outcomes, not process. Such is certainly the case for the two recent federal initiatives in this area—the National Performance Review and the Government Performance and Results Act (GPRA).
A key part of the process of setting goals and developing strategic plans is identifying and involving a program's stakeholders and balancing their potentially competing interests (e.g., reduce costs, increase services, improve quality, replace one activity with another). Much of the recent literature (see, e.g., Wholey and Hatry, 1992; U.S. General Accounting Office, 1996) has emphasized the importance of involving all stakeholders—including policy makers, agency administrators, local program operators, clients, and in some cases members of the public—in the goal setting and planning processes.
In the case of the performance partnership agreements addressed by this panel, granting agencies (e.g., various DHHS agencies) and grantee agencies (e.g., state and local agencies or organizations) may each have their own goals and strategic plans. Negotiated agreements are the mechanism for identifying the particular set of goals and objectives against which grantees' performance will be assessed.
Step 2: Develop and implement strategies for measuring performance. A performance-based system must have a means of assessing progress toward stated goals. This method of assessment is provided by translating program objectives into measures of performance: quantitative or qualitative characterizations of outcomes to be achieved if those goals are to be realized, processes to be followed in efforts to achieve those outcomes, or capacity available to support those efforts. Although measures based on outcomes are a high priority, a mix of measures will generally be needed to assess the performance of a program from various stakeholder perspectives (e.g., program managers, funders, consumers). For programs that affect outcomes over the long term (e.g., chronic disease prevention) or that guard against possible but rare adverse events (e.g., water treatment), it may be more meaningful to focus on measures that track risk reduction activities and capacity to respond than on outcome measures that would generally show little change in the short term and few differences from program to program. This panel's first report (National Research Council, 1997) provides an extensive discussion of the categories of measures deemed relevant for health-related programs (see Chapter 1 of the present report for a brief review of these categories).

While the process of measuring performance, especially in terms of relevant outcomes that should be influenced by program activities, is likely to vary from one agency or context to another, the literature on these systems offers general guidance (e.g., Wholey, 1983; U.S. General Accounting Office, 1996). In its work on GPRA, for example, the U.S. General Accounting Office (GAO) (1996:24) has noted the importance of establishing "clear hierarchies of performance goals and measures" that reflect the roles and responsibilities at varying program levels, from planning and oversight to grass-roots delivery of services. GAO comments that the performance measures should be tied to program goals and, to the extent possible, demonstrate the results of program actions that are directed toward achieving those goals. At the broadest policy and management levels, a limited set of measures that focus on key outcomes and actions should be used. Including too many measures at this level can divert attention from key outcomes without improving the usefulness of the performance information as a management tool. These measures must, however, be chosen carefully, especially if they are to be used to monitor a diverse set of activities, such as those likely to be encompassed by federal block grants to states, since activities that are represented in the set of measures are likely to be seen as having a higher priority than those not represented. A greater number of measures may be appropriate at the more detailed operational levels, such as within a state or community program. Although the specific measures are likely to differ across organizational or operational levels, they should be related to each other through their relationship to activities that contribute to the achievement of program goals.

Box 2-1 Steps in the Operation of a Performance-Based Management System: Example of a Hypothetical State Immunization Program

Step 1: Develop goals and objectives and strategies for achieving them.

The strategic plan for a state's immunization program might have as a goal reducing vaccine-preventable illness by (1) increasing the age-appropriate immunization rates among children at 2 years of age and (2) increasing influenza immunization rates among adults aged 65 and older. The plan might call for achieving specific levels of immunization coverage at some point in the future (e.g., in 3 years). The strategies for achieving these goals might include enhancing a childhood immunization registry system to generate reminder notices for parents and creating an immunization awareness program to reach older adults.

Step 2: Develop and implement strategies for measuring performance.

The performance of the immunization program might be assessed using outcome measures, such as the incidence of measles, rubella, and other vaccine-preventable diseases among children and the incidence of influenza-related deaths among older adults; risk status measures, such as the age-appropriate immunization rates among 2-year-old children or the influenza immunization rate among older adults; and process measures, such as the proportion of parents with children under age 5 who report receiving an immunization reminder notice and the proportion of older adults living in the community who report having seen information on where they could receive an influenza immunization. As a capacity measure, the state might use the proportion of children under age 2 that are included in the immunization registry. The data for these measures would be obtained from several sources. Measures of disease incidence might be limited to those diseases that the state has designated as reportable and for which the state health department collects data. Influenza-related deaths would be tabulated by the vital records system. If reasonably complete, an immunization registry could produce data on immunization rates among young children. A survey (e.g., the Behavioral Risk Factor Survey) would probably be the most effective way to obtain data on influenza immunizations among older adults, immunization reminders received by parents, and awareness of immunization services.

Step 3: Use performance information to improve management practices or resource allocation.

Persistently low or decreasing immunization rates would be a signal to examine the operation of the immunization program more closely. The process and capacity measures selected in Step 2 might reveal program weaknesses that could be remedied, such as improving the completeness of an immunization registry's coverage of young children in the state. Finding that the selected process and capacity measures were at desired levels would signal the need to examine other factors that might account for poor performance. For example, the year's influenza vaccine might have been less effective than usual because of the emergence of an unanticipated viral strain.
Once measures have been selected, the necessary data must be collected and used to calculate those measures. For some measures, it may be possible to rely on existing data sources, while other measures may require new data collection or data processing procedures. Meaningful interpretation of performance results may also require data on other factors not directly related to program activities or goals but that can affect the environment in which a program is operating, such as widespread disease outbreaks (e.g., epidemic levels of influenza), natural disasters, or changes in the local economy (e.g., increased unemployment because of layoffs). The completeness, accuracy, consistency, and timeliness of the data must be assessed, but such assessments must be made in light of the trade-off between the benefits of improving the quality of the data and the cost of doing so. Issues related to producing performance data are at the heart of this report and are addressed at greater length in subsequent chapters.
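To make the measure-calculation step concrete, the sketch below computes a registry-based immunization coverage measure of the kind discussed in Box 2-1 and pairs it with a registry-completeness check before the rate is interpreted. This is an illustration only, not part of the panel's report; all field names, figures, and thresholds are invented assumptions.

```python
# Hypothetical sketch: computing a registry-based immunization coverage
# measure and a registry-completeness (capacity) measure. All data,
# field names, and thresholds below are illustrative assumptions.

def coverage_rate(registry_records, required_doses=4):
    """Share of children aged 2 and older in the registry who are up to date."""
    eligible = [r for r in registry_records if r["age_months"] >= 24]
    if not eligible:
        return None
    up_to_date = sum(1 for r in eligible if r["doses"] >= required_doses)
    return up_to_date / len(eligible)

def registry_completeness(registry_count, census_estimate):
    """Capacity measure: share of the estimated birth cohort in the registry."""
    return registry_count / census_estimate

records = [
    {"age_months": 25, "doses": 4},
    {"age_months": 30, "doses": 3},
    {"age_months": 26, "doses": 4},
]
rate = coverage_rate(records)
completeness = registry_completeness(registry_count=3, census_estimate=5)

# A coverage figure from an incomplete registry should be flagged, since
# children missing from the registry may differ from those enrolled.
if completeness < 0.9:
    print(f"coverage {rate:.0%} (registry only {completeness:.0%} complete)")
```

The completeness check illustrates the trade-off noted above: a measure is only as meaningful as the data system behind it, so data-quality indicators are reported alongside the measure itself.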
Step 3: Use performance information to improve management practices or resource allocation. The next step for a performance-based system is to apply the information obtained from performance measurement to assess progress toward desired outcomes. If progress is not adequate, performance information can inform steps taken to improve the likelihood of achieving outcome goals in the future. Some policy makers would like to use performance measures to determine resource allocation, directing additional resources to activities demonstrating "good" performance or reducing resources to those demonstrating "poor" performance. As discussed earlier, however, the panel cautions that use of performance measures in an arbitrary, formulaic approach to resource allocation generally is not appropriate because few performance measures can adequately and unambiguously represent the complex mix of factors that determine outcomes. Only if the measures are based on a definitive causal relationship between capacity and process and the outcome of interest, and if experience has demonstrated that they do not stimulate adverse unintended consequences, might it be reasonable to consider using them as a direct determinant of resource allocation decisions.
The element of accountability that is central to such systems implies that performance data should be reported in a form that is accessible and useful to a program's stakeholders. It is critical to recognize that performance measurement is not an end in itself; it is a tool that should be used in a continuing process of assessment and improvement.
Applications of Performance Measurement
Information obtained from efforts to measure performance can be used to various ends. This section highlights four potential ways such information might be used, particularly in the context of publicly funded health-related programs. The first two reflect a monitoring and reporting function for a performance-based accountability system. Accountability comes somewhat indirectly through the reactions of administrators and constituents in response to information on how an organization is performing. The latter two applications involve the use of performance information to influence program management and resource allocation more directly. These four applications of performance measurement information are not mutually exclusive, but they do differ in their implications for those whose performance is being measured.
- Inform various stakeholders (e.g., administrators, public officials, and citizens) of progress toward stated program goals. Performance measurement information can be used to compare actual performance with performance targets. Performance data can also be used to monitor progress over time or to compare the progress of multiple groups toward agreed-upon goals and objectives. For such comparisons to be appropriate and meaningful, the performance measurement information must be generated in ways that produce comparable data. For example, a state legislature might want to compare the state's immunization rates for 2-year-olds with the national target of 90 percent that was established in Healthy People 2000 (U.S. Department of Health and Human Services, 1991). The state might also want to assess progress toward this goal by local immunization programs across the state. Consumer-oriented reporting of performance information is illustrated by "report cards" on health care provider performance, such as that developed in conjunction with the Mental Health Statistics Improvement Program (MHSIP) (MHSIP Task Force on a Consumer-Oriented Mental Health Report Card, 1996).
- Assess program effectiveness. Performance measurement can contribute to program management and accountability by serving as a primary method of surveillance for program effectiveness. It provides a framework to guide the systematic collection of information on desired outcomes and on the program activities that are specifically expected to contribute to the achievement of those outcomes. This performance information can provide an indication of how well programs are working. In addition, an ongoing performance measurement system can often provide data for assessing the effect of changes in other factors or programs related to health services (e.g., the growth of managed care).

This panel's first report (National Research Council, 1997) advised that health-related performance measurement must include a mix of outcome, risk reduction, process, and capacity measures. The use of risk reduction measures to represent intermediate outcomes is important because, as noted earlier, many health outcomes are too far "downstream" from program activities for direct causal linkages to be established or for those outcomes to be observed soon enough to be useful for program management. In general, routine and direct measurement of program processes and outcomes is not part of current practice at the state and local levels.

As early, real-time indicators of program effectiveness, performance measures can signal matters warranting more attention. Additional analysis is then needed, however, to define the elements of a successful innovation or diagnose the source of a problem.

- Improve program performance. By providing sentinel markers of program effectiveness, performance measurement can guide program managers and policy makers in steps designed to improve program performance. Performance measurement can help focus the attention of practitioners, researchers, and policy makers on best practices. Attention to and accountability for processes and intermediate outcomes that are under more direct programmatic control than longer-term outcomes will lead to a much-needed emphasis on defining standards of practice in health program areas. From the external perspective of a funding agency, data showing poor performance may signal a program's need for increased technical assistance and for guidance in identifying appropriate practices and determining how they can be implemented.

Incentives and sanctions are also used to encourage improved performance, but may prove difficult to use effectively in the public sector (Florida Office of Program Policy Analysis and Government Accountability, 1997). They can range from generally intangible positive (or negative) recognition for progress toward stated goals to specific and quite tangible financial rewards (or penalties) based on measured performance. The aim is to motivate program staff or communities to achieve desired outcomes (e.g., immunization rates, access to services, desired community behaviors) by comparing performance measurement information with targets set for program goals.

As noted earlier, the private sector often relies on the prospect of financial rewards or penalties (e.g., profits, loss of market share) to create an incentive for good performance. For public-sector programs that do not operate in a competitive, market-based environment, financial penalties may only make it more difficult to improve performance. Instead other, nonfinancial tools can be used to improve performance. For example, continued poor performance that can be attributed to program mismanagement may call for penalties in the form of increased oversight, reduced flexibility, and more directive program management by the funding agency.

The panel emphasizes that in the abstract, fear of sanctions may be an incentive toward improvement, but the application of sanctions will not, by itself, improve performance. Some observers suggest that fears by staff in state agencies that poor performance results will lead to penalties rather than assistance to improve performance can be a barrier to effective use of performance measurement (Wholey and Hatry, 1992; U.S. General Accounting Office, 1994; Florida Office of Program Policy Analysis and Government Accountability, 1997).
- Guide resource allocation and regulation of activities. Performance measurement information is also being used for allocation of budget resources or as the basis for regulatory control to ensure a minimum acceptable performance. For example, some states have adopted performance-based budgeting systems under which decisions regarding agency budgets are directly linked to measures of agency performance (see below for additional discussion of state systems). The panel suggests that the use of performance measures in this manner for health-related programs is appropriate only when clear standards or substantial experience is available to guide actions in a manner that will avoid unintended adverse consequences. For example, linking funding for substance abuse treatment services to rates of treatment completion might discourage acceptance of clients who appear less likely to remain in treatment.
In general, the panel believes that this process should not be as simple as rewarding or penalizing performance by providing or taking away resources. Indeed, as suggested earlier, such an approach may be counterproductive. Take, for example, a county with low immunization rates that have failed to improve over time. This situation could be the result of program mismanagement and poor decision making, or it could reflect especially intractable or unique local problems, such as continuing in-migration of families with underimmunized children. In either case, shifting resources away from this county to others with "better" performance would be unlikely to result in improved immunization rates. At the same time, however, a more complete understanding of program performance and its relation to outcomes will support a more rational, albeit more complex, budgeting and resource allocation decision making process.
The panel is concerned that some legislative actions to mandate performance standards and impose financial penalties for failure to comply make poor use of the performance measurement tool. For example, the 1992 Synar Amendment is intended to reduce tobacco consumption among youths by reducing their access to tobacco products. This provision requires that each state reduce to less than 20 percent the proportion of inspected sales outlets that violate the ban on the sale of tobacco products to those under age 18. States that repeatedly fail to meet the required level of performance face the loss of up to 40 percent of their Substance Abuse Block Grant funds (Substance Abuse and Mental Health Services Administration, 1998). Complicating the federal-state relationship on this issue are regulations issued by the Food and Drug Administration (1996) that make the sale of tobacco products to minors a violation of federal law, and preempt most state and local laws on this matter.
The panel sees at least four problems with the Synar Amendment's approach to performance-based accountability. First, the performance requirement was established without states having the opportunity to participate as partners in identifying the performance measure to be used or the level of performance to be achieved. Second, the financial penalty reduces the resources available to address prevention and treatment of all forms of substance abuse, not just youth tobacco use. Third, the performance requirement and its associated penalty are not related to the typical program goals and strategies of state substance abuse agencies. Few of these agencies have any enforcement authority regarding tobacco sales, and states are specifically prohibited from using their Substance Abuse Block Grant funds for any enforcement activities other than inspections of sales outlets. Finally, the penalty is based on a single process measure of performance (the proportion of sales outlets violating the ban on sales of tobacco to minors) without an assessment of the desired (intermediate) outcome—a reduction in tobacco use among minors—or conclusive evidence of a causal link between process and outcome (see Rigotti et al., 1997).
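As a concrete illustration of the first application discussed above, comparing measured performance with targets, the sketch below compares hypothetical local immunization rates against the Healthy People 2000 national target of 90 percent. The program names and rates are invented for illustration.

```python
# Hypothetical sketch: comparing local immunization rates against the
# Healthy People 2000 target of 90 percent. Names and rates are invented.

TARGET = 0.90  # national target for 2-year-old immunization coverage

local_rates = {
    "County A": 0.93,
    "County B": 0.81,
    "County C": 0.88,
}

def gap_report(rates, target=TARGET):
    """Return each program's shortfall from the target.

    A negative value means the target has been met or exceeded.
    """
    return {name: round(target - rate, 2) for name, rate in rates.items()}

for name, gap in gap_report(local_rates).items():
    status = "meets target" if gap <= 0 else f"short by {gap:.0%}"
    print(f"{name}: {status}")
```

As the surrounding text stresses, such comparisons are meaningful only if each jurisdiction's rate is produced by comparable methods; a registry-based rate and a survey-based rate for two counties would not support this kind of side-by-side report.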
Examples of Performance Monitoring and Accountability Systems
The PPG proposal that served as the impetus for the work of this panel is but one application of the performance monitoring and accountability systems that are currently in use in a variety of settings. Perhaps the most prominent governmental example is GPRA, which requires all federal executive branch agencies to implement a strategic planning and performance measurement process. Various federal programs that provide funding to states also include performance reporting requirements. The Maternal and Child Health Bureau (MCHB) of the Health Resources and Services Administration in DHHS (1997) has incorporated performance measures into the reporting requirements for the agency's block grant. The new welfare block grant program, Temporary Assistance for Needy Families (TANF), links both penalties and bonus funds to state performance in specified areas. States will also be required to develop and report on performance measures in connection with the Children's Health Insurance Program, a major initiative to extend health insurance to currently uninsured children in low-income families who are not eligible for Medicaid.3 And many state governments are adopting performance-based management and budgeting systems. In the private sector, interest in assessing and improving the quality of health care is prompting the development of performance measurement systems for health plans, health care facilities, and individual health care providers. Some of these examples of the use of performance monitoring and accountability systems are reviewed briefly below.
3. Information about the State Children's Health Insurance Program is available from the Health Care Financing Administration at <http://www.hcfa.gov/init/children.htm>.
Government Performance and Results Act
In 1993, Congress passed GPRA (P.L. 103-62) as part of an effort to improve the management and accountability of federal agencies. GPRA requires each agency to develop a strategic plan covering a period of at least 5 years, as well as annual performance plans and annual performance reports. Because GPRA requires major changes in agency management activities, its implementation is being phased in over several years. Agencies were required to submit their first strategic plan to the Office of Management and Budget (OMB) and Congress in September 1997. Annual performance plans were submitted beginning in 1998 for fiscal year 1999, and the first performance reports are to be issued in March 2000.
Each of the agency reporting requirements contributes to the overall performance-based management system envisioned under GPRA. The agencies' strategic plans are the starting point for defining program goals and outlining strategies for achieving those goals. Agencies are expected to consult with Congress and other stakeholders to ensure that their views are taken into consideration. The annual performance plan translates the broader, longer-term goals of the strategic plan into more operational goals for the coming year. Included in the annual performance plan are the performance measures the agency will use to assess progress toward its goals. In the annual performance report produced the following year, an agency is to use data collected for its performance measures to compare actual performance against the program goals. The aim over time is for these reports to include data for the reporting year plus the 3 prior years.
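The year-over-year comparison envisioned for GPRA annual performance reports, data for the reporting year plus the three prior years, shown against the annual goal, might be sketched as follows. The measure, goal, and data values are hypothetical.

```python
# Hypothetical sketch of a GPRA-style annual performance report entry:
# actuals for the reporting year and three prior years against a goal.
# The measure name, goal, and yearly values are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class MeasureHistory:
    name: str
    goal: float
    actuals: dict = field(default_factory=dict)  # year -> measured value

    def report(self, reporting_year, prior_years=3):
        """Format the reporting year and prior years against the goal."""
        years = range(reporting_year - prior_years, reporting_year + 1)
        lines = [f"{self.name} (goal: {self.goal})"]
        for y in years:
            actual = self.actuals.get(y)
            shown = "n/a" if actual is None else actual  # early years may lack baseline data
            lines.append(f"  {y}: {shown}")
        return "\n".join(lines)

m = MeasureHistory(
    name="Childhood immunization coverage",
    goal=0.90,
    actuals={1997: 0.78, 1998: 0.82, 1999: 0.85},
)
print(m.report(reporting_year=1999))
```

The "n/a" entries mirror a problem GAO noted: where baseline data were never collected, the early years of a report simply cannot be filled in, which complicates setting credible annual goals.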
As agencies have been working with OMB and Congress to implement GPRA, GAO and a panel of the National Academy of Public Administration have produced several reports reviewing progress, noting problems, and recommending steps to support the implementation process (e.g., National Academy of Public Administration, 1994, 1998; U.S. General Accounting Office, 1996, 1997a,b). All agree that GPRA provides a sound framework and has the potential to bring substantial improvements to the management of federal programs. There are, however, significant challenges to be overcome if GPRA is to be successful.
In particular, GAO (1997a) has identified several problem areas that are hindering agency progress toward implementing the provisions of GPRA. The initial strategic planning step has proven difficult for some agencies because fragmented or overlapping programs are not easily translated into clear statements of agency mission and strategic goals. For some agencies, the challenge lies in reconciling competing or conflicting policy demands. GAO also suggests that there has been limited progress in the adoption of a results-oriented organizational culture to guide agency management decisions.
Of particular relevance to the work of this panel are GAO observations regarding the use of performance measures. Agencies are finding it difficult to measure performance on an annual basis when the outcome of program activities cannot be determined within a single year or when the federal contribution to a result is only one of many influences on the outcome of interest. As the present panel did in its first report, GAO suggests compensating for these factors by using measures of intermediate results, using multiple measures, and working with stakeholders to agree on the measures to be used. Formal program evaluations can provide additional insight, but because they require substantial planning, time, and funding, they must be used selectively (U.S. General Accounting Office, 1997b). GAO has also found that agencies lack suitable data for some performance measures because data for this purpose are not collected or are not of acceptable quality. For some agencies, a lack of baseline data has made it difficult to establish annual performance goals.
Use of Performance Measures in the Maternal and Child Health Services Block Grant Program
Within DHHS, MCHB has responded to the new GPRA requirements in part by introducing a performance measurement component into the Maternal and Child Health Services Title V Block Grant to States Program (see Maternal and Child Health Bureau, 1997, 1998a). Originally authorized in 1935 by Title V of the Social Security Act, this block grant provides federal funds to assist states in developing and operating programs intended to improve the health of pregnant women and children and provide services for children with special health care needs, including children with developmental disabilities or chronic illnesses. Four broad categories of services are supported: direct health care services (e.g., prenatal care), enabling services (e.g., case management, transportation), population-based services (e.g., immunizations, lead screening), and infrastructure-building services (e.g., needs assessments, information systems). States are required to match the block grant funds at a rate of $3 for every $4 in federal funds. Federal funding for fiscal year 1999 is $580 million.
Since 1989, state accountability for the use of Title V funds has been linked to reporting on key maternal and child health indicators, as well as budget and expenditure data. Prompted by GPRA's new requirements for performance-based accountability and reporting by federal agencies, MCHB has revised the states' block grant reporting requirements to include the use of performance measures. In addition to providing a more effective indication of the impact of Title V programs at the state level, the state performance reports are expected to furnish data that MCHB will need to prepare GPRA performance reports for Congress.
Beginning with fiscal year 1998, each state must report on 6 health outcome measures (perinatal mortality, infant mortality, neonatal and postneonatal mortality, child death rates, and a measure of the disparity between black and white infant mortality rates), 18 ''national" performance measures to be used by all states, and 7–10 additional performance measures selected by the state and approved through negotiation with MCHB. The negotiation process also includes reviewing annual performance targets to be set by states for each measure. Examples of the 18 national measures are the percentage of a state's children with special health care needs who have a medical/health home,4 the birth rate for teenagers aged 15–17, the percentage of newborns screened for hearing impairment before hospital discharge, and the percentage of very low birth weight infants delivered at facilities for high-risk deliveries and neonates. Within this set of measures, all four categories of grant-supported services are represented.
Although the Title V Block Grant was not part of the original PPG proposal, MCHB has drawn on the PPG model in developing its performance measurement program. The Bureau has worked in partnership with the states, through the Association of Maternal and Child Health Programs and other channels, to reach agreement on the outcome measures and 18 national performance measures to be used. The outcome measures represent long-term health improvement goals to which Title V programs should be contributing but generally do not control. The national and state-selected performance measures are a mix of capacity, process, and risk factor measures and are linked more directly to program activities and shorter-term goals. This use of a mix of measures is consistent with the approach advocated in this panel's first report (National Research Council, 1997).
MCHB has also recognized that performance reporting should take into account differences among the states in their health needs and priorities and in the role Title V programs may play in meeting those needs. The use of state-selected measures allows states to emphasize program activities of special interest or importance. MCHB reviews these measures with each state to help ensure that they are practical and effectively link program activities and outcome goals. The review also gives MCHB an opportunity to increase the cross-state comparability of these data by encouraging states that select similar measures to adopt identical definitions of the numerators and denominators for those measures. States have access to technical assistance for their performance measurement work through MCHB offices and outside consultants. In addition, an MCHB systems development initiative is providing state grants of up to $100,000 that can be used to support information systems activities related to Title V performance measurement (Maternal and Child Health Bureau, 1998b).
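The cross-state comparability issue described above comes down to whether two states computing "the same" measure use identical numerator and denominator definitions. The following sketch illustrates that point in code; the measure name, field descriptions, and counts are hypothetical examples, not actual MCHB specifications.

```python
# Illustrative sketch only: the measure, its definitions, and the counts
# below are hypothetical, not actual Title V reporting specifications.
from dataclasses import dataclass

@dataclass(frozen=True)
class MeasureSpec:
    """A performance measure defined by standard numerator/denominator text."""
    name: str
    numerator_desc: str
    denominator_desc: str

def compute_rate(numerator: int, denominator: int) -> float:
    """Return the measure as a percentage of the denominator population."""
    if denominator <= 0:
        raise ValueError("denominator must be positive")
    return 100.0 * numerator / denominator

# Two states' reports are comparable only if both apply the same spec.
newborn_hearing = MeasureSpec(
    name="newborn hearing screening",
    numerator_desc="newborns screened for hearing impairment before discharge",
    denominator_desc="live births in the reporting year",
)
state_a = compute_rate(numerator=45_210, denominator=47_500)
state_b = compute_rate(numerator=12_040, denominator=13_100)
print(f"{newborn_hearing.name}: state A {state_a:.1f}%, state B {state_b:.1f}%")
```

If one state instead counted screenings within the first month of life in its numerator, the two percentages would no longer be comparable even though they carry the same measure name, which is why MCHB encourages identical definitions during its review.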
Reporting Requirements for the Temporary Assistance for Needy Families Program
Although not a health program, the new federal program for public assistance to needy families is another example of the shift from categorical to block funding that gives states greater flexibility in return for accountability for their performance. The TANF program was enacted under the Personal Responsibility and Work Opportunities Reconciliation Act of 1996 (P.L. 104-193) (see Administration for Children and Families, 1998a). TANF replaces the Aid to Families with Dependent Children (AFDC) program and the Job Opportunities and Basic Skills (JOBS) training program. The new program aims to provide time-limited assistance to needy families and to reduce their dependence on government benefits by promoting job preparation, work, and familial responsibility through marriage. States have greater flexibility than under previous programs to specify who receives benefits, under what terms, and for how long, but they must submit reports demonstrating that their performance is in compliance with the provisions of the legislation and achieving desired outcomes.
The principal TANF performance standard relates to the work requirements for assisted families: a specified proportion of adult recipients must be engaged in work or allowable work-related activities for a minimum number of hours per week. For example, in 1998, 30 percent of all TANF families had to have an adult working at least 20 hours per week, with higher rates of work participation required of two-parent families. By 2002, 50 percent of all TANF families must have an adult working at least 30 hours per week. Evidence of substantial reductions in caseloads can substitute for achievement of the targeted work participation requirements. States must file quarterly reports to the federal government on these work participation rates.
TANF links both penalties and bonuses to the level of performance. States that do not meet the work participation requirements or other performance standards are subject to a reduction in their annual block grants. An initial penalty of 5 percent for noncompliance with work participation rates can be increased by 2 percent per year, to a maximum of 21 percent for repeated noncompliance. States are, however, given the opportunity to develop a plan for achieving compliance before penalties are assessed. States also can compete for annual "high performance" bonuses intended to reward accomplishments in moving welfare recipients into jobs (Administration for Children and Families, 1998b). In the first year, states with the best performance on each of four measures of employment gains will be eligible for bonus awards.5 Because states can be rewarded for the quality of the work that recipients find as well as the proportion of recipients who find work, the bonuses give states with less vibrant labor markets an incentive for improvement. The TANF program also includes provisions for annual bonuses to the states that are most successful in reducing rates of out-of-wedlock childbearing.
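The escalating penalty schedule described above is simple enough to express as arithmetic. The sketch below is an illustration of that arithmetic only, not the statutory computation, which involves additional conditions such as corrective compliance plans.

```python
# Hedged sketch of the penalty schedule described in the text: a 5 percent
# initial reduction for missing work participation rates, growing by
# 2 percentage points for each consecutive year of noncompliance, capped
# at 21 percent. Illustrative only; the statute adds further conditions.
def tanf_penalty_pct(consecutive_years_noncompliant: int) -> int:
    """Percentage reduction applied to the annual block grant."""
    if consecutive_years_noncompliant <= 0:
        return 0
    return min(5 + 2 * (consecutive_years_noncompliant - 1), 21)

# Under this schedule the cap is reached in year nine: 5 + 2 * 8 = 21.
for years in (1, 2, 9, 12):
    print(f"year {years}: {tanf_penalty_pct(years)}% reduction")
```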
On the basis of its observations regarding health-related block grant programs, this panel urges careful monitoring of the measures used to assess performance under TANF to ensure that they produce useful information without promoting unintended adverse effects. Some observers have expressed concern that these measures do not provide an adequate picture of program outcomes in terms of potential changes in the health and well-being of children in families receiving assistance or of adults or children in families that leave the welfare rolls (National Research Council, 1998).6 National and state data systems may need to be modified to produce such information. The panel also notes the limited opportunity states appear to have had to influence the performance criteria on which penalties are based. Greater collaboration is evident in determining how the high-performance bonus funds will be awarded. DHHS is working with the National Governors' Association, the American Public Human Services Association (formerly the American Public Welfare Association), and state representatives to develop the measures and formula to be used for this purpose (Administration for Children and Families, 1998b).
State Developments in Performance-Based Budgeting
States, like the federal government, are looking to revamp their program management process to better ensure desired outcomes for their citizens (Zelio, 1997). Current state-level performance monitoring and budgeting initiatives are the latest in a series of efforts to increase the responsiveness of state executive agencies to the electorate and the legislature. These initiatives seek to move beyond line-item budgeting, with its focus on detailed categories of expenses and resultant micromanagement of complex organizations, to an emphasis on program outcomes. Such efforts are generally driven by management-oriented state legislatures whose members believe that the implementation of improved management controls within state government systems will lead to more effective government overall.
A 1996 study found that 45 states use performance measures in various ways (Florida Office of Program Policy Analysis and Government Accountability, 1997). Six of these states use performance information as a budget decision tool, and another 9 are in the process of implementing such a system. Another recent review of performance-based budgeting found that 7 states have introduced links between performance and financial or management incentives (e.g., financial rewards for agencies or individual employees, increased flexibility in use of funds); 2 of these states also apply disincentives, such as increased oversight and reporting requirements (Melkers and Willoughby, 1998). States that engage in performance budgeting are actively restructuring their budget documents, reorganizing agencies, and changing organizational missions to align with policy responsibilities. In some instances, organizations and suborganizations are realigned to be consistent with program objectives so that policy responsibilities are located within a single organization.
Oregon's strategic planning effort illustrates the use of a participatory statewide approach to planning and setting performance goals (Oregon Progress Board, 1997). In a process that was initiated by the governor in 1989, a strategic plan for the state, not just state government, was developed with input from the public and private sectors, including the general public. A set of benchmarks7 was chosen to translate the goals of the strategic plan into measurable objectives on such matters as health, education, employment, and the environment. For example, current health-related benchmarks include measures such as the percentage of adults who do not currently smoke tobacco; the percentage of eighth grade students who used alcohol in the previous month; and the percentage of Oregonians with a lasting developmental, mental, and/or physical disability who work. These benchmarks have been used by the legislature and state agencies in setting program and budget priorities for which specific performance measures are developed. However, it may be difficult to relate a benchmark based on a summary measure (e.g., years of potential life lost before age 70) to specific program activities or funding needs.
A recent review of the state's strategic plan (Oregon Progress Board, 1997) resulted in several recommendations that may be relevant for other performance measurement activities. Among these recommendations was identifying the relationships among benchmarks (e.g., teen pregnancy and child poverty). In addition, the system should use benchmarks for which reliable data are regularly available at a reasonable cost. The targets selected for benchmarks should also reflect realistic, evidence-based expectations of achievable performance; for example, limitations in current understanding of the factors that affect birth weight make it unreasonable to set a target of reducing the number of low-weight births by 50 percent. Moreover, as responsibility for implementing programs is transferred to the community level, the development of accurate and timely local-level data becomes a priority. Another recommendation was to reduce the number of benchmarks from 259 to about 100, as the larger number of measures had proven difficult to track and prioritize. This panel notes, however, that a reduction in the number of measures involves a trade-off, since it may lead to reduced visibility of some concerns within important specific areas (e.g., health).
Today's state-level performance monitoring and budgeting efforts vary in their focus, with their approach depending on which of three overall purposes they serve (Florida Office of Program Policy Analysis and Government Accountability, 1997):
- Guide management and administration. This purpose is served by an approach, similar to that of GPRA, in which each agency and its subagencies work with their stakeholders to develop a long-range plan and define outcome and other performance measures. These formalized measures are used to guide the management and administration of the organization. Although the measures may be shared externally with the legislative body or the public, their primary purpose is to help agencies focus on a particular set of goals.
- Inform the budget process. States emphasizing this purpose concentrate their efforts on explaining the focus of their program and its achievements to the legislature and the public. The information provided is highly descriptive and includes details on capacity, resources, and expenditures. It allows legislators to make policy decisions in a larger context and consider the functions of all sectors of government. There are no direct financial or statutory incentives or disincentives under this approach.
- Provide a basis for resource allocation. States with this focus hope that performance budgeting systems will provide the major rationale for allocation of funds and make it possible to set measurable objectives. An attempt is made to report on past performance and shift the focus from line budgets to desired outcomes. In exchange for accountability, these states hope to offer executive agencies flexibility in management as a way of rewarding achievement. While some reporting is reduced or eliminated, the approach incorporates periodic program-specific evaluations that are supported by independent scientific verification of performance to validate accountability.
State experiences with performance-based budgeting suggest several lessons (U.S. General Accounting Office, 1994; Florida Office of Program Policy Analysis and Government Accountability, 1997). States have found it important to involve a broad range of stakeholders in the strategic planning process to achieve consensus on program goals and measures. Legislative and executive leadership are needed to ensure continuity of objectives over time and continued availability of the resources necessary to produce performance measurement reports. There are major challenges involved in designing performance monitoring systems that can clearly define governmental responsibilities and are meaningful to decision makers. Furthermore, despite their commitment to performance-based management, state government personnel generally need more training in the development and use of performance measures. Information systems are recognized as necessary components of a performance-based management system, but they are frequently inadequate to generate the needed data on outcomes, program processes, and strategy-specific costs.
Health Care Performance Measurement in the Private Sector
Until fairly recently, performance-based accountability for health care outcomes operated primarily on a case-by-case basis through malpractice claims and quality assurance programs, reflecting an assessment of the care provided by individual clinicians or hospitals to individual patients. More recently, quality improvement and performance measurement programs have altered this accountability framework by introducing continuous monitoring of the processes and outcomes of care for populations of patients. As with the performance-based budgeting approaches described above, clinical performance information serves as a management tool that can be used to promote improvements in health care.
Some of the best-known recent efforts to develop performance measurement systems in health care have been led by employer groups, credentialing organizations, health maintenance organizations, hospitals, and private consultants. Among the leading private-sector efforts are those by the National Committee for Quality Assurance (NCQA), the Foundation for Accountability (FACCT), and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), which are described in more detail below. In addition, the American Medical Association (1998) has introduced an accreditation program for individual physicians that will include standardized measures of clinical performance and patient care results.
There is an increasing degree of collaboration among these groups in the development of clinical performance measures and performance measurement systems. Moreover, as a growing proportion of Medicare and Medicaid services are provided by private-sector health plans, there is increasing public-private collaboration in the further development of some of these performance measurement systems. The federal government (U.S. Department of Health and Human Services, 1998) has announced plans to implement the recommendation of the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry (1998) to establish a Forum for Health Care Quality Measurement and Reporting that will work with private-sector groups to develop a core set of measures and standards for measurement. The American Medical Accreditation Program of the American Medical Association, JCAHO, and NCQA have established a Performance Measurement Coordinating Council to coordinate their performance measurement activities and through which they anticipate working with the newly proposed forum (Joint Commission on Accreditation of Healthcare Organizations, 1998b). In a more targeted collaboration, the Diabetes Quality Improvement Project has brought together FACCT and NCQA, plus the American Academy of Family Physicians, the American College of Physicians, the American Diabetes Association, the Health Care Financing Administration, and the Veterans Health Administration, to develop a set of diabetes-related performance measures suitable for use nationally (Diabetes Quality Improvement Project, 1998).
National Committee for Quality Assurance
One of the most prominent performance measurement tools in health care is NCQA's Health Plan Employer Data and Information Set (HEDIS), a set of standardized measures for comparing the quality of care provided by participating health maintenance organizations (National Committee for Quality Assurance, 1997a). Originally developed to inform employers purchasing health services for their employees, HEDIS has evolved to address consumer information needs as well. It now includes measures specifically for the Medicare and Medicaid populations, as well as the commercially insured. Health plan reports are filed with NCQA, which in 1997 began publishing an annual summary (National Committee for Quality Assurance, 1997e).
HEDIS 3.0, the most recent version, includes 71 measures that health plans are required to use and 32 other measures (a "testing set") that are undergoing further evaluation and refinement. Each measure has a standard definition and technical specifications for data collection and calculation. For the measures based on data to be obtained through a member satisfaction survey, a standardized survey instrument has been developed (National Committee for Quality Assurance, 1997b). The HEDIS 3.0 measures cover the following domains of performance: effectiveness of care, access/availability of care, satisfaction with the experience of care, health plan stability, use of services, cost of care, informed choice, and health plan descriptive information.
An ongoing review and development process has been established to support the continued evolution of HEDIS measures and the overall HEDIS system. The Committee on Performance Measurement, which oversaw the development of HEDIS 3.0, will continue to guide the review of current measures, the identification of measures to be retired, the testing of new measures, and a research agenda to support the development of new measures and overall improvements in performance measurement. Measurement advisory panels will provide additional expertise for work in specific areas (e.g., behavioral health, cardiovascular disease, women's health).
HEDIS has become a widely recognized set of performance measures for assessing health care services provided by health maintenance organizations, but some have found it too limited in certain areas. In particular, the limited number of measures on mental health and substance abuse services has led to efforts by others to develop suitable measures for managed behavioral health services (e.g., American Managed Behavioral Healthcare Association, 1995; J. Dilonardo, Substance Abuse and Mental Health Services Administration, personal communication, 1998).
In an initial test of the feasibility of HEDIS, the Report Card Pilot Project provided useful lessons that were reflected in the development of HEDIS 3.0 (Spoeri and Ullman, 1997) and are relevant to the work of this panel. Specifically, the pilot project revealed the need to adopt a broad set of measurement domains and to field test measures before formal adoption. It also demonstrated the variation in the organization and operation of health plan information systems and the need for greater standardization to produce comparable data across plans. Clinical information systems were generally found to be weaker than those for administrative and financial data. External data audits were valuable in identifying errors and inconsistencies in data systems and in the specifications and processes used to calculate measures. The documentation for HEDIS 3.0 includes a set of audit standards (National Committee for Quality Assurance, 1997d) and a report specifically on the development of information systems that can support performance measurement using HEDIS (National Committee for Quality Assurance, 1997c). A continuing area of concern is the need for risk adjustment of HEDIS measures. Although this need has been recognized, suitable risk adjustment techniques for use across plans have not yet been developed.
Foundation for Accountability
FACCT was created in 1995 in response to a desire by consumer groups and purchasers of health care services for a more effective means of bringing their perspectives to bear on the assessment of health care quality (Foundation for Accountability, 1998a). Working with consumer focus groups and experts, FACCT has developed sets of measures for use in assessing care for adult asthma, alcohol misuse, breast cancer, diabetes, and major depressive disorder (Foundation for Accountability, 1998b). In terms of the panel's framework, these sets include measures of process, risk status, and outcomes, including measures of satisfaction with care for the specific condition. FACCT has also developed a set of measures that focuses on smoking as a health risk factor. Two other sets address general health status and overall consumer satisfaction with services and care (e.g., getting needed services, choice of providers). Under development are measurement sets for coronary artery disease, end-of-life care, HIV/AIDS, and pediatric care. The measures adopted by FACCT are field tested by health plans and group practices as part of the development process.
FACCT has placed special emphasis on the consumer perspective and seeks to measure elements of health care quality that are important to consumers. In recent work with the Health Care Financing Administration, FACCT (1997) developed a framework intended to communicate health care performance information (e.g., measures from FACCT and HEDIS) to Medicare beneficiaries in an effective manner.8 The project also explored conceptual and technical issues involved in constructing summary performance scores for health plans or health care providers.
Joint Commission on Accreditation of Healthcare Organizations
JCAHO has long served as one of the principal accrediting bodies for health care facilities. Its accreditation programs now include hospitals, home care agencies, long-term care facilities, behavioral health services, ambulatory health care providers, laboratories, and health care networks. Efforts over the past few years to integrate clinical performance measurement into JCAHO's accreditation process resulted in the Oryx initiative, which began in 1997 (Joint Commission on Accreditation of Healthcare Organizations, 1998c). Included in the Oryx program are hospitals, long-term care organizations, health care networks and health plans, home care organizations, and behavioral health care organizations. In the past, the accreditation process has been based on evidence of compliance with JCAHO standards covering such matters as staff credentials, equipment, and policies (Joint Commission on Accreditation of Healthcare Organizations, 1998a). In the panel's performance measurement framework, these standards could be viewed as focusing primarily on capacity (i.e., inputs to health care services), rather than on processes or outcomes of care. The addition of performance measures is seen as a way for the accreditation process to stimulate and contribute to quality improvement efforts.
The Oryx program will allow health care organizations to meet their performance measurement requirements through the use of a variety of measurement systems. For hospitals and long-term care facilities, JCAHO has approved more than 200 measurement systems operated by a variety of organizations. These include JCAHO's own Indicator Measurement System, which offers a set of performance measures focused on specific areas of patient care (e.g., obstetrics, trauma, oncology). Measures for health care networks have been selected from measure sets developed by JCAHO, FACCT, NCQA, the University of Colorado Health Sciences Center, and the University of Wisconsin (Madison). Health care organizations will report their performance data through the organizations that manage the specific measurement systems they adopt, not directly to JCAHO.
To maintain their JCAHO accreditation, health care organizations must report on a specified minimum number of measures selected from approved measurement systems. For example, hospitals and long-term care organizations must initially report on at least 2 clinical measures that together are relevant to at least 20 percent of their patient population, or they must report on 5 measures. Health care networks must initially report on 10 measures. Plans call for increasing the required number of measures and patient population coverage. Separate reporting requirements are being developed for each accreditation program.
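The initial hospital and long-term care reporting rule just described has two alternative paths to compliance, which can be made explicit in a short sketch. This is an illustration of the rule as summarized in the text, not JCAHO's actual evaluation procedure, and the example inputs are hypothetical.

```python
# Illustrative check of the initial Oryx reporting rule as summarized above:
# at least 2 clinical measures that together cover at least 20 percent of
# the patient population, OR at least 5 measures regardless of coverage.
# A sketch only, not JCAHO's actual compliance procedure.
def meets_initial_oryx_rule(num_measures: int, population_coverage_pct: float) -> bool:
    covers_enough = num_measures >= 2 and population_coverage_pct >= 20.0
    enough_measures = num_measures >= 5
    return covers_enough or enough_measures

# Hypothetical hospitals: the first and second comply by different paths,
# the third satisfies neither path.
print(meets_initial_oryx_rule(2, 25.0))
print(meets_initial_oryx_rule(5, 5.0))
print(meets_initial_oryx_rule(3, 10.0))
```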
An advisory council has been established to provide a continuing review of the measurement systems included in the Oryx program. This group will also help select a set of core measures for each accreditation program. Review of candidate measures for use by hospitals is expected to begin in late 1998. Recognizing that selection of a measurement system and use of specific performance measures will be unfamiliar tasks for some of the participating organizations, JCAHO has developed a guidebook and other resources to help organizations evaluate and select a measurement system that will meet their needs.
Lessons for Publicly Funded Health Programs
The evolution of performance measurement in health care in the private sector offers lessons to those developing performance measures for publicly funded health programs. One key lesson is that performance measurement requires a continuing effort to select and improve measures and the measurement process. The quality and usefulness of the performance data being produced by health care organizations continue to improve, but conceptual and technical challenges remain (see, e.g., Eddy, 1998). The individualized performance ''report cards" developed in the past by some health plans lack the comparability across plans and providers that might be achieved by the larger-scale performance measurement programs, such as those of NCQA, FACCT, and JCAHO. These programs rely on standardized sets of measures and on guidelines for collecting the relevant data with consistent methodologies.
The activities of these nongovernmental groups are an important resource for performance measurement for the publicly funded health-related programs that the panel is addressing. The work done by these groups to identify suitable measures for clinical care can inform the selection of measures for related aspects of public programs. Likewise, the experience these groups are gaining in developing measurement standards and information system tools to support performance measurement in a health services context may help guide related efforts in the public sector.
Although the concept of performance measurement is hardly new and the use of performance indicators has been attempted episodically in various programs, the widespread use of such indicators in federal programs as contemplated by
GPRA is a new and significant requirement that is also emerging among state and local governments. Similarly, the increasingly widespread use of HEDIS and other performance measurement systems in health care is evidence of changing attitudes and expectations regarding accountability and management in the private sector.
Early experience with these vastly expanded requirements for accountability suggests that the new approaches offer many attractive features, but successful implementation will require substantial and continuing efforts to overcome several challenges. Conceiving and developing measures that capture performance accurately and comprehensively is often difficult and requires specialized expertise; a lack of data to support selected measures may necessitate the use of second-best choices; and multiple sets of measures may be required to satisfy the needs of varied users (e.g., program managers, funders, and the public). As more is learned about the use of performance measurement, progress is possible on all of these fronts. After reviewing performance measurement experience in other contexts, the panel concluded that several principles should guide current efforts to implement performance measurement for publicly funded health programs.
Link performance measurement to program goals. Performance measurement should be viewed as a tool that facilitates the monitoring and promotion of progress toward program goals, not as an end in itself. It must be based on a clear articulation of program goals and desired outcomes—health outcomes in the context of this report—and some sense of how those goals can be achieved. Outcome measures should reflect a program's goals, and measures of process and capacity should reflect the evidence on effective methods of achieving those outcomes. Performance measurement should be a constructive process that contributes to organizational capacity to meet program goals.
Adopt a "market basket" approach. A performance measurement system should promote the development of recognized sets of measures with agreed-upon definitions from which program participants (e.g., states or communities) should be expected to select specific measures that reflect the program priorities and strategies they have adopted. Even though programs generally have a core set of goals and objectives that are applicable regardless of where the program is operating, they must respond to diverse needs and regional circumstances. This means that specific program priorities and the strategies adopted to achieve them are likely to vary across states and communities. Therefore, a single, mandated set of performance measures is not appropriate. However, an effort should be made to associate particular program goals and strategies with specific outcome, risk status, process, and capacity measures so that identical activities related to those goals and strategies can be monitored using the same measures. For example, a program to reduce teenage smoking might be expected to use a standard measure of smoking prevalence. The specific process and risk status measures adopted should reflect the choice of strategies for reducing the prevalence of teenage smoking (e.g., reducing access to cigarette vending machines, restricting tobacco advertising near schools). Ideally, each measure should be recognized as valid, reliable, and responsive to change.
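The selection logic described above can be sketched in code. The following is a minimal, purely illustrative example; the measure names, strategy names, and categories are hypothetical and do not correspond to any actual measure set. It shows how a shared catalog with agreed-upon definitions lets participants choose different strategies while retaining a common, comparable core measure.

```python
# Illustrative "market basket" of performance measures.
# All measure and strategy names below are hypothetical examples.

# Shared catalog: every participant uses the same definition for a given
# measure, so identical activities are monitored with identical measures.
MEASURE_CATALOG = {
    "teen_smoking_prevalence": "outcome",      # standard core outcome measure
    "vending_machine_access_rate": "process",  # strategy-specific measure
    "ads_near_schools_count": "process",       # strategy-specific measure
}

# Each strategy is associated with the standardized measures that monitor it.
STRATEGY_MEASURES = {
    "reduce_vending_access": ["vending_machine_access_rate"],
    "restrict_advertising": ["ads_near_schools_count"],
}

def select_measures(strategies):
    """Return a participant's measure set: the shared core outcome measure
    plus the process measures tied to its chosen strategies."""
    selected = {"teen_smoking_prevalence"}
    for strategy in strategies:
        selected.update(STRATEGY_MEASURES[strategy])
    return sorted(selected)

# Two states choosing different strategies still share the core outcome
# measure, so their results on that measure remain comparable.
print(select_measures(["reduce_vending_access"]))
print(select_measures(["restrict_advertising"]))
```

The design point is that variation lives in the strategy-to-measure mapping, not in the measure definitions themselves, which mirrors the panel's distinction between locally chosen priorities and standardized measurement.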
Recognize differing needs for performance information. The content and number of useful performance measures should be expected to differ between a program's operating level and the policy and sponsorship level across the intergovernmental structure. Compared with other levels, the operating level is likely to require more measures, and measures that focus more on process than on outcome. A performance measurement system should recognize these differing needs, but aim to use measures that can be linked, conceptually or in practice, to provide a consistent assessment of performance across these different levels. This principle is consistent with the GAO (1996) recommendation regarding GPRA that "hierarchies" of performance goals and measures are needed to reflect differing roles and responsibilities at various organizational levels.
Ensure the feasibility of data collection and analysis. The most elegant performance measures are of little use without a feasible data system to support them. Considerations such as the quality of the available data and the cost of obtaining specific data elements may limit the choice of measures, particularly in the short run. In some cases, it may be necessary to use less desirable measures while enhancing existing data sources or building better data sets. The panel's first report (National Research Council, 1997) specifically noted that the lack of data comparable across states was a significant obstacle to identifying optimal performance measures for many program areas. Given the trade-offs involved, it is clearly important to consider data collection and analysis strategies as part of the development of performance measurement systems.
Assess the consequences of using performance measurement. Performance measurement may achieve the desired effect of improving outcomes, or it may inadvertently promote undesired effects. Measurement results could, for example, be misinterpreted. A state with rates of food-borne illness that are higher than those of other states could be viewed as having problems in food safety practices when, instead, the higher rates reflect a more effective surveillance system. Another undesirable effect might be neglect of program areas or activities that are not being measured. Prematurely high expectations for performance data or rapid adoption of rigid performance targets could undermine intended program goals. For example, program practices might be manipulated to achieve "good" results, perhaps by avoiding populations that are difficult to serve rather than by implementing more effective services. The performance monitoring system, including individual performance measures, should be evaluated periodically to assess the consequences of its use. Such evaluation would help ensure that the system's goals were being met and decrease the likelihood of manipulation or inadvertent adverse effects, such as reduced services to groups that may be likely to have poor outcomes.
Adopt a developmental approach. The development of a successful performance measurement system should be viewed as an activity that continues to
evolve over time. Furthermore, because performance measurement is a new and largely unfamiliar policy mechanism, it should be tested in the contexts of goal setting, progress monitoring, and signaling of progress or problems before being used for resource allocation or regulatory purposes. The panel advocates starting with a comprehensive vision for a performance measurement system that is implemented in manageable phases, during which the participants learn and the system grows. There must be a firm commitment to ongoing research to develop new and better measures, relate these measures to program actions, and improve the performance measurement system. Research and evaluation studies must be done to test the effectiveness of performance measurement as a tool for improving health outcomes and program management.
For health programs, measures should be refined or replaced as understanding of the linkages between health outcomes and program activities ("processes") improves and as better sources of data are developed. Moreover, program priorities can be expected to change over time, necessitating the identification and testing of new performance measures. Sustained investments are needed in improvements to data systems, as well as in training and technical assistance to ensure that program and policy staff develop the necessary skills and expertise. With time and experience, performance measurement may prove to be an effective basis for allocation of resources or assessment of regulatory benchmarks, but it must always be used prudently, with an understanding of both its strengths and its limitations.
References
Administration for Children and Families 1998a. Temporary Assistance for Needy Families (TANF). Fact sheet. February 13, 1998. U.S. Department of Health and Human Services. http://www.acf.dhhs.gov/programs/opa/facts/tanf.htm (August 4, 1998).
1998b. Formula for Awarding the First High Performance Bonus in Fiscal Year (FY) 1999. Memorandum to state agencies administering the Temporary Assistance for Needy Families (TANF) program and other interested parties. March 17, 1998. U.S. Department of Health and Human Services. http://www.acf.dhhs.gov/news/welfare/highperf.htm (August 4, 1998).
American Academy of Pediatrics 1992. The medical home. Pediatrics 90:774.
American Managed Behavioral Healthcare Association 1995. Performance Measures for Managed Behavioral Healthcare Programs (PERMS). Washington, D.C.: American Managed Behavioral Healthcare Association.
American Medical Association 1998. American Medical Accreditation Program. http://www.ama-assn.org/med-sci/amapsite/index.htm (July 31, 1998).
Diabetes Quality Improvement Project 1998. Initial Measure Set (Final Version). August 14, 1998. http://www.facct.org/DQIP-fnl.html (November 25, 1998).
Eddy, D.M. 1998. Performance measurement: Problems and solutions. Health Affairs 17(4):7–25.
Florida Office of Program Policy Analysis and Government Accountability 1997. Performance-Based Program Budgeting in Context: History and Comparison. Report No. 96-77A (April). Tallahassee: Florida Legislature. Available at http://www.oppaga.state.fl.us/budget/reviews.html/ (April 8, 1998).
Food and Drug Administration, U.S. Department of Health and Human Services 1996. Regulations restricting the sale and distribution of cigarettes and smokeless tobacco to protect children and adolescents. Final rule. August 28. Federal Register 61(168):44395–44445.
Foundation for Accountability 1997. Reporting Quality Information to Consumers. Portland, Ore.: Foundation for Accountability.
1998a. About FACCT. http://www.facct.org/about.html (April 15, 1998).
1998b. Measuring Quality. http://www.facct.org/measures.html (December 28, 1998).
Institute of Medicine 1997. Improving Health in the Community: A Role for Performance Monitoring. J.S. Durch, L.A. Bailey, and M.A. Stoto, eds. Committee on Using Performance Monitoring to Improve Community Health. Washington, D.C.: National Academy Press.
Joint Commission on Accreditation of Healthcare Organizations 1998a. Accreditation Information. http://www.jcaho.org/acr_info/acr_std.htm (August 16, 1998).
1998b. Nation's Three Leading Health Care Quality Oversight Bodies to Coordinate Measurement Activities. Press release. May 19, 1998. http://www.jcaho.org/news/nb.htm (June 5, 1998).
1998c. Oryx Fact Sheet for Health Care Organizations. http://www.jcaho.org/perfmeas/oryx/sidebar1.htm (July 24, 1998).
Maternal and Child Health Bureau 1997. Guidance and Forms for the Title V Application/Annual Report. Maternal and Child Health Services Title V Block Grant Program. December 22, 1997. Rockville, Md.: U.S. Department of Health and Human Services, Health Resources and Services Administration.
1998a. Office of State and Community Health. U.S. Department of Health and Human Services, Health Resources and Services Administration. http://www.hhs.gov:80/hrsa/mchb/osch.htm (December 30, 1998).
1998b. State Systems Development Initiative (SSDI) Grant Application Guidance for FY98. U.S. Department of Health and Human Services, Health Resources and Services Administration. http://www.hhs.gov:80/hrsa/mchb/guidance.htm (June 4, 1998).
Melkers, J., and K. Willoughby 1998. The state of the states: Performance-based budgeting requirements in 47 out of 50. Public Administration Review 58(1):66–73.
MHSIP Task Force on a Consumer-Oriented Mental Health Report Card 1996. The MHSIP Consumer-Oriented Mental Health Report Card. Final report of the Mental Health Statistics Improvement Program (MHSIP) Task Force on a Consumer-Oriented Mental Health Report Card. Rockville, Md.: U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration, Center for Mental Health Services.
National Academy of Public Administration 1994. Toward Useful Performance Measurement: Lessons Learned from Initial Pilot Performance Plans Prepared Under the Government Performance and Results Act. Washington, D.C.: National Academy of Public Administration.
1998. Effective Implementation of the Government Performance and Results Act. Washington, D.C.: National Academy of Public Administration.
National Committee for Quality Assurance 1997a. HEDIS 3.0/1998. Washington, D.C.: National Committee for Quality Assurance.
1997b. HEDIS 3.0/1998. Vol. 3, Member Satisfaction Survey. Washington, D.C.: National Committee for Quality Assurance.
1997c. HEDIS 3.0/1998. Vol. 4, A Road Map for Information Systems: Evolving Systems to Support Performance Measurement. Washington, D.C.: National Committee for Quality Assurance.
1997d. HEDIS 3.0/1998. Vol. 5, HEDIS Compliance Audit Standards and Guidelines. Washington, D.C.: National Committee for Quality Assurance.
1997e. The State of Managed Care Quality. Washington, D.C.: National Committee for Quality Assurance. http://www.ncqa.org/news/report.htm (April 7, 1998).
National Performance Review 1993. From Red Tape to Results: Creating a Government That Works Better and Costs Less. Washington, D.C. Available at http://www.npr.gov/library/nprrpt/annrpt/redtpe93/index.html (August 27, 1998).
1997. Serving the American Public: Best Practices in Performance Measurement. Washington, D.C. Available at http://www.npr.gov/library/review.html (July 23, 1998).
National Research Council 1997. Assessment of Performance Measures for Public Health, Substance Abuse, and Mental Health. E.B. Perrin and J.J. Koshel, eds. Panel on Performance Measures and Data for Public Health Performance Partnership Grants, Committee on National Statistics. Washington, D.C.: National Academy Press.
1998. Providing National Statistics on Health and Social Welfare Programs in an Era of Change. Summary of a workshop. C.F. Citro, C.F. Manski, and J. Pepper, eds. Committee on National Statistics. Washington, D.C.: National Academy Press.
Oregon Progress Board 1997. Oregon Shines II: Updating Oregon's Strategic Plan. Salem, Ore.: Oregon Progress Board.
Osborne, D., and T. Gaebler 1992. Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector. Reading, Mass.: Addison-Wesley.
President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry 1998. Quality First: Better Health Care for All Americans. Washington, D.C.: President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. Available at http://www.hcqualitycommission.gov/final/ (July 27, 1998).
Rigotti, N.A., J.R. DiFranza, Y.C. Chang, T. Tisdale, B. Kemp, and D.E. Singer 1997. The effects of enforcing tobacco-sales laws on adolescents' access to tobacco and smoking behavior. New England Journal of Medicine 337:1044–1051.
Spoeri, R.K., and R. Ullman 1997. Measuring and reporting managed care performance: Lessons learned and new initiatives. Annals of Internal Medicine 127:726–732.
Substance Abuse and Mental Health Services Administration 1998. SAMHSA's Tobacco Activities: Implementing the Synar Requirements Under the Substance Abuse Prevention and Treatment Block Grant. U.S. Department of Health and Human Services. http://www.samhsa.gov/csap/synar/sydex.htm (May 18, 1998).
U.S. Department of Health and Human Services 1991. Healthy People 2000: National Health Promotion and Disease Prevention Objectives. DHHS Pub. No. (PHS) 91-50212. Washington, D.C.: Office of the Assistant Secretary for Health.
1998. President Endorses Quality Commission's Final Report and Issues Executive Memorandum to Improve Health Care Quality. White House fact sheet. March 13, 1998. http://www.hhs.gov/news/press/1998pres/980313a.html (August 4, 1998).
U.S. General Accounting Office 1994. Managing for Results: State Experiences Provide Insights for Federal Management Reforms. GAO/GGD-95-22. Washington, D.C.: U.S. Government Printing Office.
1996. Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: U.S. Government Printing Office.
1997a. The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven. GAO/GGD-97-109. Washington, D.C.: U.S. Government Printing Office.
1997b. Managing for Results: Analytic Challenges in Measuring Performance. GAO/HEHS/GGD-97-138. Washington, D.C.: U.S. Government Printing Office.
1997c. Performance Budgeting: Past Initiatives Offer Insights for GPRA Implementation. GAO/AIMD-97-46. Washington, D.C.: U.S. Government Printing Office.
Wholey, J.S. 1983. Evaluation and Effective Public Management. Boston: Little, Brown.
Wholey, J.S., and H.P. Hatry 1992. The case for performance monitoring. Public Administration Review 52(6):604–610.
Zelio, J. 1997. Update on performance budgeting. LegisBrief 5(37). National Conference of State Legislatures.