Building Stronger Information Capabilities
Summary of Chapter Recommendations
As discussed in Chapter 4, the Committee recommends that by 2005, all health care providers participating in government health care programs be capable of electronically gathering and reporting the subset of patient-level data needed to calculate the core sets of performance measures. Full implementation of this recommendation will depend in part on the development of a more sophisticated clinical information technology infrastructure throughout the health care system.
RECOMMENDATION 5: The federal government should take steps immediately to encourage and facilitate the development of the information technology infrastructure that is critical to health care quality and safety enhancement, as well as to many of the nation’s other priorities, such as bioterrorism surveillance, public health, and research. Specifically:
a. Congress should consider potential options to facilitate rapid development of a national health information infrastructure, including tax credits, subsidized loans, and grants.
b. Government health care programs that deliver services through the private sector—Medicare, Medicaid, the State Children’s Health Insurance Program (SCHIP), and a portion of Department of Defense (DOD) TRICARE—should adopt both market-based and regulatory options to encourage investment in information technology. Such options might include enhanced or more rapid payments to providers capable of submitting computerized clinical data, a requirement for certain information technology capabilities as a condition of participation, and direct grants.
c. The Veterans Health Administration (VHA), DOD TRICARE, and the Indian Health Service (IHS) should continue implementing clinical and administrative information systems that enable the retrieval of clinical information across their programs and can communicate directly with each other. Whenever possible, the software and intellectual property developed by these three government programs should rely on Web-based language and architecture and be made available in the public domain.
Although this report focuses on the federal government’s role, the committee believes private-sector purchasers should also contribute to building the country’s health information infrastructure by providing financial and other incentives.
Comparative quality data should be available in the public domain for use by many stakeholders. There are numerous potential uses of such data. Public- and private-sector oversight organizations might rely on performance measurement data to develop benchmarks for the clinical practice patterns of providers and goals for stimulating improvements in clinical care. The data would also be useful to states and communities as a way of monitoring the progress of community-based efforts in meeting public health goals (e.g., reducing obesity and use of tobacco). Professional groups, including board certification entities and others involved in continuing education, would be likely to use the data to provide ongoing feedback to providers and identify best practices. Group purchasers and consumers might use the quality data to assist in the selection of providers and health plans.
RECOMMENDATION 6: Starting in FY 2008, each government health care program should make comparative quality reports and data available in the public domain. The programs should provide for access to these reports and data in ways that meet the needs of various users, provided that patient privacy is protected.
Pooling of performance data across all six major government programs would enable more accurate performance assessment for those receiving services through multiple programs. It would also permit benchmarking of performance levels across programs.
RECOMMENDATION 7: The government health care programs, working with the Agency for Healthcare Research and Quality (AHRQ), should establish a mechanism for pooling performance measurement data across programs in a data repository. Contributions of data from private-sector insurance programs should be encouraged provided such data meet certain standards for validity and reliability. Consumers, health care professionals, planners, purchasers, regulators, public health officials, researchers, and others should be afforded access to the repository, provided that patient privacy is protected.
The committee is recommending a strategy for quality enhancement that relies on measurement and reporting of standardized performance measures across the government health care programs. Valid clinical performance measurement depends on the availability of clinical data (McGlynn and Brook, 2001).
Access to data remains problematic in a health care system that still depends largely on claims data, abstraction of data from paper records, and surveys to determine whether patients are receiving identified elements of care. The dependence on abstraction generally limits performance measurement to evaluation of entities with sufficient administrative infrastructure to develop the necessary data, such as hospitals, health plans, and large group practices, thereby excluding many small ambulatory care settings where a large proportion of care is delivered. Record abstraction is a labor-intensive process that usually occurs retrospectively rather than as an integral part of the clinical process, imposing a burden that prohibits more than intermittent review. While less costly than record abstraction, reliance on claims data may not provide the level of clinical detail required to track processes of care accurately (McIntyre et al., 2001; Schneider and Lieberman, 2001). For example, current claims data in many cases do not indicate whether complications in the course of hospitalization arose from preexisting comorbidities or adverse consequences of care. Moreover, claims data are available only for insured populations and are limited to billable services, thus constraining the aspects of care that can be evaluated.
Today’s data sources simply cannot support the strategy for quality enhancement proposed in this report. Indeed, there is broad consensus that the nation must develop a functional health care information technology infrastructure (Becher and Chassin, 2001; Eddy, 1998; Institute of Medicine, 2001; McGlynn and Brook, 2001; National Committee on Vital and Health Statistics, 2001; Schneider et al., 1999). Growing evidence supports the conclusion that automated clinical information and decision-support systems are critical to addressing the nation’s health care quality gap (Institute of Medicine, 2001). Computerized order entry and electronic medical records have been found to result in measurably improved care and better outcomes for patients (Bates et al., 1999; Birkmeyer et al., 2002; Webster, 2001). These results are particularly notable when electronic ordering triggers clinical decision-support information, for example, on antibiotic use (Christakis et al., 2001; Demakis et al., 2000; Rollman et al., 2001; Safran, 2001). Similar evidence suggests that these systems have the potential to reduce costs as well (Birkmeyer et al., 2002; Webster, 2001). In one study in which electronic order entry was accompanied by decision-support tools such as allergy and drug-interaction warnings, serious medication errors declined by 86 percent (Bates et al., 1999).
Anecdotal reporting on the experience of individual hospitals confirms significant error reduction and savings in labor costs (Landro, 2002; Webster, 2001). Other experiments in the use of technology to improve outcomes and increase efficiency are ongoing. For example, an element of some of the Medicare Coordinated Care Demonstration Projects discussed in Chapter 3 is to evaluate the impact of electronic remote monitoring of patients to manage treatment (Department of Health and Human Services, 2001; Georgetown University Medical Center, 2002). While it may be too early to determine whether the observed cost savings completely offset or exceed the costs of setting up such systems, evidence on the reduction in harm to patients from computerized order entry is unambiguous and significant (Birkmeyer et al., 2002).
Standardized performance measure datasets containing patient-level information could be mined to learn many things and to support various strategies for quality improvement. Providers could use comparative quality data to benchmark their performance and share information on best practices. Groups such as the American Board of Medical Specialties and many of its member boards, which are already expanding practice oversight activities as an integral component of their recertification processes, may use the data as an input to decision-making (American Board of Medical Specialties, 2000). Government programs would be able to identify the levels of care received by different populations served by a program, such as rural and urban populations or those residing in different regions; they could then target strategies to address those disparities. Such datasets could also support the development of targeted regulatory strategies, such as reduced regulatory burden for providers that achieve quality goals or intensified participation in quality improvement initiatives for providers whose performance was determined to be substandard.
Uniform automated datasets also offer the opportunity for government programs to develop multiple formats for the presentation of performance data tailored to the needs of specific audiences, including providers, consumers, and community health care leaders (Hibbard et al., 2002). Reporting efforts for consumers should recognize the diversity of cultural, racial, and ethnic groups being served, including differences in languages and levels of health literacy. Quality reports for providers should be tailored to assist clinicians in identifying opportunities for improvement in their own practices. Efforts should also be made to provide physicians with information that can better inform their referrals of patients to specialists and hospitals.
As discussed in Chapter 4, providers and plans are faced with a multiplicity of measures from a variety of sources with which they have varying relationships, adding to the burden of a cumbersome collection process. In addition, current performance measurement fails to capture how providers interact across settings and organizations in providing care to individuals (Paone, 2001). This weakness reflects underlying limitations in the ability of providers to communicate with each other regarding patient care or to have real-time access to information on concurrent treatment of individual patients by multiple providers. Tracking clinical performance requires an integrated health information framework (Schneider et al., 1999). That framework depends in turn on the development of computerized clinical data (Gawande and Bates, 2000; McGlynn and Brook, 2001; McIntyre et al., 2001; Schneider et al., 1999).
The remainder of this chapter reviews the current status of information technology development in the government health care programs, examines strategies for motivating the development of enhanced capabilities, and addresses the key issue of access to the resulting information.
THE STATUS OF INFORMATION TECHNOLOGY DEVELOPMENT
The integration of information technology into health care beyond administrative and billing transactions is a complex task. The design of an information technology system and the way in which its components are connected to and operate with each other is referred to as the system architecture. An adequate information technology infrastructure requires an architecture that links and distributes robust clinical information throughout the network while also meeting the information and technology needs of specific users. In addition, health care organizations must meet the growing interest among patients in online access to their health information and the technology applications that can assist them with distance care (Rundle, 2002). Moreover, such applications must be consistent with the privacy protections in the Health Insurance Portability and Accountability Act (HIPAA; Public Law 104-191).
Significant technical and financial barriers have impeded electronic infrastructure development in the private health care sector. Creation of an information technology infrastructure requires capital investment and ongoing resources for system maintenance. Initial implementation of systems may also entail disruptions in practice and temporary losses of practice revenue. In the current environment, providers lack incentives to make the necessary investments, and this lack of financial incentives is compounded by technical barriers that cause many providers to question the value of the investment.
The development of robust infrastructures for information technology in the health care arena has been hampered by a lack of national standards for the coding and classification of clinical and other health care data and the transmission and sharing of such data (National Committee on Vital and Health Statistics, 2001). Numerous efforts are under way to address this issue, including: 1) the Consolidated Health Informatics Initiative, created under the auspices of the White House Office of Management and Budget in 2001 to facilitate the development of standards that would ensure compatible information technology systems across the government health programs (Office of Management and Budget, 2002); 2) the Markle Foundation’s Connecting for Health Initiative (2002); and 3) an IOM project on Patient Safety Data Standards. It is important that these initiatives move forward expeditiously to address the critical need for national data standards.
It should be noted that the establishment of national standardized performance measures by the federal government in collaboration with the private sector, as recommended in this report, will remove one major barrier to the development of clinical data standards. The lack of clarity and consistency in performance reporting requirements across public and private payers and other stakeholders currently complicates efforts to reach a broad-based consensus around the content and representation of clinical data elements. The forthcoming IOM report on patient safety data standards will be addressing this issue in greater detail.
There are differences in information technology infrastructures across the six major government programs. In general, the four government programs that pay for health care delivered through the private sector—Medicare, Medicaid, SCHIP, and a portion of the DOD TRICARE program—have limited ability to obtain computerized clinical data from providers, reflecting the low level of automation in this sector. By contrast, the government health care programs characterized by government ownership and operation of the direct care system—the programs of VHA and IHS, and the remainder of DOD TRICARE—have implemented more computerized clinical data systems and decision-support applications.
Government Programs That Deliver Care Through the Private Sector
As noted, Medicare, Medicaid, SCHIP, and a portion of TRICARE provide care to beneficiaries through the private sector. Accordingly, their clinical data capacity largely mirrors the limited applications of information technology in most private-sector health care delivery settings. These government programs primarily collect claims and encounter data from which some clinical data can be mined.
For Medicaid and Medicare managed care, the Health Plan Employer Data and Information Set (HEDIS) and other data can be obtained at the health plan level for specified conditions and quality improvement projects, and data can be gathered from medical chart abstraction and audits of Quality Improvement Organizations (QIOs) in the fee-for-service (FFS) sector (MacTaggart, 2002). In addition, Medicaid maintains the Medicaid Management Information System, a hardware and software system that enables the states to collect claims and encounter data and submit them to the federal government in the form of the Medicaid Statistical Information Set (MSIS). These systems are designed to track utilization rather than provide clinical data for performance measurement (Friedman, 2002). MSIS provides patient-level data, but it does not include data on providers. Patient-level data available at the state level are not shared with CMS; rather, CMS receives summary, aggregate reports from the states (Buchanan, 2002). The capacity to gather computerized clinical data from the majority of clinicians in sufficient detail to enable performance measurement remains largely undeveloped.
Government Programs That Provide Direct Care
The largest programs that provide direct care—that of the VHA and that portion of TRICARE provided by DOD through its own facilities and infrastructure (the Military Health System, or MHS)—have developed systems for recording and extracting clinical data that stem from their adoption of the computer-based patient record. IHS, rather than adopting a computer-based patient record, has developed substantial automated clinical data capacity that complements electronically entered clinical data with abstracted medical charts.
Veterans Health Administration
VHA has one of the largest integrated health information systems in the United States. Its operating objective is to input data once so that they can be utilized throughout the network by different types of users on an authorized basis. This system enables electronic documentation of health data, real-time access to important clinical information at the point of care (e.g., radiological images, laboratory test results, clinical observations, and pharmacy orders), and linkages to facilitate administrative and financial processing. Other applications, such as those for reporting adverse medical events, represent pioneering efforts to use health information systems to improve patient safety. A technical description of the VHA system is provided in Appendix C.
At the heart of the VHA health information system is the Computerized Patient Record System (CPRS), which serves as a unifying platform for the integration of all patient-oriented applications (e.g., administrative, clinical) across the network. The current CPRS is a Windows-type desktop program that displays all relevant patient data needed to support clinical decision making. It enables clinicians to enter, review, and continuously update all information (including pharmacy, laboratory, and radiology) related to any patient. The CPRS can also be accessed from the operating room and enables automatic generation of the postoperative report. To address privacy concerns, access to CPRS is limited to those authorized to perform various actions on specific clinical documents. The system depends on a legacy server with limited portability for other users. However, VHA is in the process of upgrading to a system that uses Web-based language. VHA’s information technology upgrade is expected to be completed by 2005 (Christopherson, 2002). The projected costs associated with the upgrade are approximately $100 million in 2002 and $150 million in 2003 (Christopherson, 2002).
In addition to provider-oriented applications of medical records for gathering clinical data, VHA has established the My Healthy Vet program, which provides veterans an online connection to their medical records. Participating veterans can obtain electronic copies of key portions of their electronic health records, add medical information in a “self-entered” section, and link to a health education library.
The Military Health System
The MHS provides information technology support to over 540 military facilities worldwide. A brief technical description of the MHS information systems and their applications is provided in Appendix C.
Like VHA, the MHS currently maintains a computerized patient record (CPR) for laboratory, radiology, and pharmacy information. By the end of 2002, it will launch a pilot of a fully electronic CPR that will establish an individual’s medical record from beginning to end of military service. The record will be linked to a Clinical Data Repository (CDR) that currently serves as a “clinical warehouse” for electronic laboratory, radiology, and pharmacy data, and for applications associated with the CPR (e.g., wellness alerts, provider prompts). Data in the CDR will support clinical research, wellness alerts, symptom surveillance, and population health improvement efforts.
The Theater Medical Information Program (TMIP) provides data for the clinical care of battlefield casualties and the management of military medical assets. TMIP functions on an independent temporary database system that is linked to a clinical data repository—the Composite Health Care System. During deployment, the relevant medical information in a patient’s electronic record (held in the CDR) is accessed through TMIP.
All clinical documentation related to local treatment during deployment is held in the temporary database. When deployed personnel return, the new medical information is downloaded into the CDR and the patient’s CPR.
Scheduled for roll-out to all facilities in late 2002, the military’s e-health communications system, TRICARE Online, will provide information to patients on health conditions and interactive health tools, disease management and treatment compliance recommendations, TRICARE medical facilities and providers, and appointment scheduling. Patients will be able to create their own personal health care home page to store medical information and resources in a secure environment.
The various elements of the VHA and MHS systems are designed to integrate clinical care activities with quality enhancement and measurement. Accordingly, clinicians are not required to create or participate in separate processes to gather data, file reports, and address quality concerns. Rather, relevant information can be retrieved automatically for a variety of different purposes. The result is a system that is less burdensome and supports clinical care across multiple settings.
Indian Health Service
IHS has developed an automated system of patient-level clinical data for its outpatient facilities that is used to support care delivery and to provide the capacity to conduct performance measurement. Radiology images and results, laboratory tests, and prescription orders are entered electronically in a patient-specific field. Paper medical charts are routinely abstracted in the medical records department of the facility and added to the electronic records as the Patient Care Component (PCC), a process that necessarily entails redundant labor and delay in the electronic inclusion of clinical care data. The abstracted PCC data include date and time of visit, provider identification, vital statistics, diagnosis, treatment modalities, patient education efforts, and surgical and injury history. The electronic information system employs multiple clinical applications of the PCC, such as triggers to decision-support tools, summaries of the 10 most recent encounters, and graphed laboratory values. The system can produce data on performance measures and respond to aggregate queries. While the electronic system is integrated at the site of care, it is not fully integrated across the different sites within the IHS system. A subset of the PCC data is transmitted to the central data warehouse maintained by IHS—the National Patient Information Reporting System (Kihega, 2002).
Joint Information Technology Initiatives
In May 2002, as the result of a White House initiative in 1997–98 to better track the course of apparent service-connected disease processes following the Gulf War, VHA and DOD began operating the Federal Health Information Exchange (FHIE). FHIE allows each program to send designated clinical information, such as pharmacy and laboratory data, to a common database and retrieve data submitted by the other program as needed (Christopherson, 2002). The second phase of the joint operation is expected to be completed in 2006, after DOD finishes instituting its electronic medical record system. This phase will result in implementation of electronic information systems that are compatible between VHA and DOD and that will be able to communicate directly with each other without having to go through a common database. In addition, it is anticipated that IHS and VHA will coordinate on clinical software development for future applications. Supporting the above efforts will be the previously mentioned Consolidated Health Informatics Initiative to develop common clinical data standards under the leadership of the Centers for Medicare and Medicaid Services (CMS) (Christopherson, 2002). Such common standards would enhance the usefulness and dissemination of the VHA/DOD technology to other programs and the private sector.
The committee is recommending that each of the government health care programs implement a core set of standardized performance measures by 2005, and that the number of measures be steadily increased over the next 5 to 8 years (see Chapter 4). Provider reporting of the data necessary to enable performance measurement is required by 2007. Although it may be possible in the short run for government programs that deliver care through the private sector to rely on medical record abstraction to meet this requirement, greater computerization of clinical information will be required over the long run to sustain performance measurement, apply it to a broader range of conditions, and decrease the associated administrative burden on providers. It is anticipated that each of the government programs will pursue different strategies for stimulating the development of enhanced information technology capabilities. The challenges clearly will be much greater for those programs that deliver care through the private sector. However, programs that provide direct care will also need to make some changes. For all programs, substantial and effective privacy protections for patient-level data are essential to securing the support of both providers and patients for the collection of performance measurement data.
Fostering Information Technology Development in the Private Health Care Sector
For providers that currently rely on computers simply for billing and appointment scheduling, building a clinical data capacity will require both capital and training investments. It is the committee’s conclusion that motivating providers associated with government programs to undertake the changes necessary for quality enhancement will require incentives and assistance. A range of actions—from payment and contracting incentives to tax credits and direct grants to regulation—is available to generate change. The committee encourages each government health care program to evaluate these options, sponsor public/private collaboration on the best approaches to development, and select those most appropriate to its objectives.
Financial and Administrative Incentives
To offset the costs of the capital investment and training required to achieve greater levels of automation, higher payments could be offered to providers that can harvest and submit clinical data electronically according to standardized core sets of clinical performance measures. Alternatively, those that submit performance data electronically could receive more rapid electronic payment. In Medicaid and SCHIP, direct financial incentives could be instituted through a substantially enhanced match to states to make payments to providers that meet certain automation standards. Contractors that meet specified information technology capacity could also be eligible for bonuses or other financial rewards (Kaye and Bailit, 1999). In addition, the government health programs could identify regulatory or administrative requirements that could be waived for providers with specified electronic capabilities.
While the above approaches can be expected to entice some providers to meet new electronic standards, they still leave providers substantial discretion to maintain the status quo. Other means of fostering change may therefore be necessary.
Contracting and Regulation
To accelerate the adoption of clinical information systems, program contracts with providers could include standardized information technology specifications as a contract condition. Similar provisions included in Medicaid contracts with managed care organizations require specified administrative data capacity, quality improvement activities, and grievance and appeal procedures (Rosenbaum et al., 1998).
Consistent with the recommendations in Chapter 4, providers could be required as a condition of participation (COP) to make available in automated form, by specified dates, the clinical data needed for performance measurement, with standardized data elements, definitions, and terminology. The advantage of making specified levels of information technology capability a contract term or COP is that it would eliminate provider discretion to continue existing practices, apply to all providers equally, and ensure that all populations would benefit equally from quality enhancement activities. The disadvantage is that without increased payments, such a requirement could exact a disproportionate outlay from safety net providers with strained resources, resulting in unintended negative effects on access to care.
Alternatively, maintenance of automated data could be a condition of payment. As a practical matter, this approach would motivate providers to automate clinical data as quickly as a COP would. However, it raises similar concerns about the effects on safety net providers and access to care. Accordingly, any regulatory or contracting strategy to improve provider information technology capabilities should be accompanied by appropriate financial support for such providers.
Grants and Tax Credits
The committee envisions a health information infrastructure that enables transfer of the information necessary to measure care across settings, time, and programs to reflect the needs and care experiences of patients, rather than the silo functions of individual providers. Such an infrastructure implies a transformation in the care delivery process that requires national commitment. The Hill-Burton Act (Public Law 79-725) established a grant and loan program that subsidizes construction costs to increase hospital capacity, contributing over $6 billion to that effort in the private sector (Health Resources and Services Administration, 2000). A similar substantial grant program should be considered to assure the proliferation of an information technology infrastructure that can ultimately support clinical care and enable performance measurement as a seamless process. Such a program could be initiated with targeted demonstration projects testing the amount, structure, and effectiveness of the grants, as well as their applicability to different types of providers.
Each program will need to conduct its own analysis of the most effective strategies for motivating change among its participating providers while collaborating with the other programs to ensure complementary approaches. It is the committee’s expectation that a combination of approaches, such as higher payments to safety net or other providers and direct grants combined with information technology–related conditions of participation, payment, and contracting, would achieve the highest level of information technology improvement in the shortest amount of time with the least effect on access. While the committee recognizes that the initial investment required is large, the benefits of preventing errors, improving care and health status, and reducing duplication of services that can accrue from real-time access to current clinical information will offset much of the cost and provide a substantial public good. Beyond quality improvement, a robust health information infrastructure is essential to other national priorities, such as the medical tracking and follow-up critical to identifying and combating bioterrorism. Support for development of an adequate clinical information technology infrastructure should be commensurate with its importance to domestic security.
Information Technology Development in Direct Care Programs
While the largest government providers, VHA and the Military Health System (MHS), have a significant record of accomplishment and an ongoing commitment to innovation in their systems, implementation of the committee’s recommendations will require that they move rapidly to complete the standardization and compatibility efforts now in process and to ensure that such standardization is amenable to Web-based applications and dissemination to the private sector. VHA is currently reconfiguring its information technology system to make data definitions conform to Web-based language. This reconfiguration should support implementation of the standardized core datasets needed for performance measurement. The MHS needs to complete the expansion of its system to all regions and to develop strategies for including clinical data from care delivered through its external purchased network in its health information system.
In all the programs in which the government is the direct provider of care, investments should be made in information technology infrastructure appropriate to the needs of the programs. Direct government investment in information technology infrastructure produced the VHA systems, which the clinicians using them now regard as indispensable to direct care, even though they require new investment for essential updating. Proportionate investments should be made to ensure the development of compatible information technology systems in public health and community clinics and in IHS.
The ongoing collaborative efforts of VHA, the MHS, IHS, and CMS to develop uniform systems are supported by the programs’ discretionary funds and have received no specific financial support from Congress (Christopherson, 2002). As a result of its size, scope, and range of applications, however, information technology collaboration among the federal programs provides the foundation for development of an electronic infrastructure with the strongest potential for dissemination. Accordingly, it is the committee’s conclusion that additional funding will likely be needed to ensure the full implementation of uniform, compatible federal health information technology that lends itself easily to private-sector applications in the spirit of technology transfer from the government.
As the history of the Internet illustrates, the proliferation of electronic capacity among large numbers of providers creates its own momentum, driving the expansion of usage among others not previously engaged (Gladwell, 2000). The financial, organizational, and inertial challenges to building an information technology infrastructure in the health care sector are apparent but surmountable. Given the demonstrated need for effective quality enhancement activities, however, it is equally clear that the status quo is not acceptable.
ACCESS TO INFORMATION
It is the committee’s conclusion that improving public access to information on health care quality will increase the impetus for addressing safety and quality concerns and is an important component of a comprehensive strategy to achieve significant improvement in the coming decade. Improving consumers’ awareness of the variability in the quality of health care is a necessary prerequisite to engaging them in making choices based on quality. Public access to such information has the potential to drive consumers to select better care, while also giving providers incentives to improve care (Marshall et al., 2000), furnishing accrediting boards and certifying entities with additional information and tools to motivate improved clinical care, and facilitating community and public health planning.
Overview of Reporting Efforts
To date, public reporting efforts have focused primarily on health plans and, to a lesser degree, hospitals or particular surgical interventions (Schauffler and Mordavsky, 2001). Very limited comparative information has been released for medical groups or physicians.
Most health plan report cards include process of care measures (HEDIS), patient perceptions of care (CAHPS), and accreditation status (McGlynn and Adams, 2001; National Committee for Quality Assurance, 2002). Analyses have consistently found that such report cards have little effect on consumer decision making (Schauffler and Mordavsky, 2001). Many factors appear to contribute to this lack of impact (Hibbard et al., 2001; Hibbard, 1998; McGlynn and Adams, 2001; Schauffler and Mordavsky, 2001), including:
• The decision of most relevance to consumers is the selection of a provider, not a health plan, probably in part because many consumers have very limited choice of health plans.
• The performance measures reflect not the issues of importance to consumers, but rather what is easy to measure given existing administrative data sets.
• The information presented is too complex for most consumers to understand.
• There is too much information for most consumers to process and use.
• The report cards are not produced by a trusted source.
• Consumers are not aware that the report cards exist.
Report cards focusing on hospitals or procedures are even fewer in number, and there is very limited evidence regarding their impact. For example, there is evidence that report cards, in combination with other interventions, have stimulated specific clinical changes to improve care among the poorest-performing providers in cardiac surgery in New York (Chassin, 2002; Hannan et al., 1994, 1997).
Comparative quality reporting is a rapidly developing trend in both the public and private health care sectors, to a great extent in response to growing demand for information on the quality of care (California HealthCare Foundation, 2002). In addition to CMS’ publication of comparative information on nursing homes, dialysis centers, and health plans, business groups and health plans have begun making public comparative surveys of consumer satisfaction with provider groups. For example, the Pacific Business Group on Health, PacifiCare, HealthNet, and Blue Cross of California each put out separate proprietary report cards on their participating provider groups, many of which participate in more than one plan. The Health Resources and Services Administration (HRSA) sponsors a Website that compares kidney transplant outcomes by provider across the nation (Scientific Registry of Transplant Recipients, 2001). A growing number of states, including New York, Pennsylvania, and California, annually publish comparative provider-level outcomes for cardiac bypass surgery. In addition to cardiac bypass surgery reports, the Pennsylvania Health Care Cost Containment Council puts out comparative reports on hospital and health plan performance and maintains several interactive databases that users can access to generate their own quality reports (Pennsylvania Health Care Cost Containment Council, 2002). There are numerous hospital surveys, including the Leapfrog Hospital Survey, the California Hospital Outcomes Program, and the Patients’ Evaluation of Performance in California. Many of these surveys capture experiences that represent important patient-centered dimensions of quality that may not
be reflected in more clinical performance measures: respect for patient preferences, coordination of care, pain relief, and emotional support (California HealthCare Foundation, 2002).
While the need for research to improve the efficacy, accuracy, and salience of comparative reports is discussed in some detail in Chapter 6, the proliferation of comparative reporting provides a distinct opportunity for leadership from the government health programs. Specifically, the standardization of definitions, sampling techniques and other methodologies, and statistical analysis could significantly improve the consistency of survey findings and assure the apples-to-apples comparisons that are currently lacking in many of these efforts (McGlynn et al., 1999; Simon and Monroe, 2001). Such standardization also would reduce the burden on provider groups that must respond to multiple surveys and audits and deal with inconsistent ratings provided by diverse purchasers and plans (Simon and Monroe, 2001). The committee believes that collaboration between those private entities with experience with provider-level report cards and the government health programs would facilitate the standardization necessary to achieve reliability, burden reduction, and greater dissemination of information on quality of care.
In summary, public reporting initiatives are in an early stage of development. To date, the quality measures and reports that have been provided to consumers appear not to have captured their interest. Yet the evidence is sparse and mixed and thus must be interpreted cautiously. Reporting efforts with sufficient clinical detail for providers are even fewer in number, but here there are some promising results. Accreditation and certification entities are actively engaged in the collection, and in some cases reporting, of comparative performance data, and the National Committee for Quality Assurance (NCQA), in particular, has been a leader in this area. Indeed, performance measurement has become an integral part of most leading private-sector oversight processes. It seems likely that these groups would make use of richer comparative data as they become available, especially if they are involved as partners in the developmental efforts.
The IOM Committee has concluded, like an earlier IOM committee (Institute of Medicine, 2001: Chapter 3), that steps should be taken to make performance measurement data available to various stakeholders in ways that will be most useful. The extent to which various stakeholders will use these data is unclear, because most reporting efforts to date have been poorly designed and executed and hampered by the absence of detailed clinical data to derive measures likely to be meaningful to various
users. Consequently, future reporting efforts should be carefully designed, pilot tested, evaluated, and subjected to continuous refinement.
The steady demand for comparative performance data by accrediting entities, group purchasers, health plans, state governments, and others is indicative of a keen interest in quality information. If the federal government does not share performance measurement data and information with private sector stakeholders, it is very likely that these groups will continue to impose their own reporting requirements on providers, thus contributing to administrative burden.
Comparative data could be made available through prepared reports that synthesize data according to subjects or themes, authorized queries of the database, or analysis of data displayed on Websites. The data available should enable users to determine the comparative performance of providers or groups of providers in a program, as well as the program’s performance in improving quality of care. For program operation purposes, comparative data should enable the establishment of valid baselines within programs and the assessment of improvement or deterioration in performance. In addition, comparative data analysis will enable programs to assess geographic and population-specific disparities in care, as well as program-wide patterns of deficiency. For example, such analysis could reveal racial disparities within programs or disparities in care between Medicare and Medicaid.
Accordingly, program-level data should be susceptible to analysis for a range of purposes by a variety of users. The programs should be able to receive, store, and organize the data into domains appropriate for different types of users, such as consumers, purchasers, providers, regulators and their contractors, the public health community, researchers, and policy makers. The program database should be structured to provide different levels of access to data depending on the decision needs of the user.
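The notion of a database that organizes data into domains and grants different levels of access by user type can be sketched in a few lines. This is purely an illustrative sketch under assumed names: the roles, the three granularity tiers (patient, provider, aggregate), and the record fields are hypothetical; the committee does not prescribe any particular schema.

```python
# Illustrative sketch only: role names and granularity tiers are assumptions,
# not anything the report specifies.
from dataclasses import dataclass

# Granularity tiers, from most detailed (3) to least detailed (1)
TIERS = {"patient": 3, "provider": 2, "aggregate": 1}

# Hypothetical mapping of user types to the most detailed tier they may see
ROLE_ACCESS = {
    "regulator": "patient",
    "researcher": "patient",
    "purchaser": "provider",
    "provider": "provider",
    "consumer": "aggregate",
    "policy_maker": "aggregate",
}

@dataclass
class Record:
    tier: str      # "patient", "provider", or "aggregate"
    measure: str   # e.g., a diabetes process measure
    value: float

def visible_records(role, records):
    """Return only the records at or below the detail level the role permits."""
    limit = TIERS[ROLE_ACCESS[role]]
    return [r for r in records if TIERS[r.tier] <= limit]

records = [
    Record("patient", "hba1c_tested", 1.0),
    Record("provider", "hba1c_tested", 0.84),
    Record("aggregate", "hba1c_tested", 0.79),
]

print(len(visible_records("consumer", records)))   # aggregate data only: 1
print(len(visible_records("regulator", records)))  # all three tiers: 3
```

In a real repository the tiering would be enforced by the database and its access-control layer rather than by application code, but the principle is the same: one store, several views, each matched to the decision needs of the user.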
Once comparative clinical measures are available, they can be used as tools in the government’s multiple roles in its health programs—purchaser, regulator, and provider. To enhance the effectiveness of regulation in bringing specific benefits to the public, the performance data can aid regulators in identifying substandard performance and developing cross-program strategies for improving care. To improve market performance in achieving quality goals, comparative data can inform purchasers (including government purchasers) in the selection and payment of contractors based on clinical performance.
As a market tool for consumers, access to information on the comparative quality performance of different providers for the core sets of measures, according to consistent standards and methodologies, is essential. Comparative data appropriately presented have the potential to assist consumers in provider and plan selection, thereby creating market incentives for providers to improve care, as well as channeling patients to higher-quality providers as reflected by the measures. Similarly, access by providers (including government-operated delivery systems) to comparative information on quality may enable them to better assess their own clinical environment and identify accepted processes for improving care.
The provision of comparative performance data to patients may also provide the opportunity to educate patients about the critical elements of their care. While the effect of performance measures on patients’ medical self-management has not been evaluated, familiarity with the core measures could potentially enable consumers to better understand the critical elements of their care and become more active participants in their own health care management (Greenfield et al., 1985). For example, familiarity with diabetes process measures could stimulate diabetic patients to request eye exams or track their level of blood sugar control. Finally, comparative performance data could augment existing public health mechanisms for tracking the incidence and prevalence of certain types of diseases and interventions.
Use of a Pooled Data Repository Across Programs
The government programs should explore mechanisms for pooling the performance data needed to evaluate and compare quality across populations and programs. Pooled data could support quality enhancement at both the micro and macro levels; pooled Diabetes Quality Improvement Project (DQIP) data, for example, can help identify geographic, provider-level, and program-specific variations in the quality of diabetes care.
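The kind of variation analysis described above amounts to computing a measure's rate within each program, region, or provider. A minimal sketch, with entirely hypothetical pooled records and an assumed DQIP-style process measure (annual HbA1c testing):

```python
# Hypothetical pooled records: program, region, and whether an annual
# HbA1c test (a DQIP-style process measure) was performed. All data invented.
from collections import defaultdict

pooled = [
    {"program": "Medicare", "region": "Northeast", "hba1c_tested": True},
    {"program": "Medicare", "region": "South",     "hba1c_tested": False},
    {"program": "Medicaid", "region": "Northeast", "hba1c_tested": True},
    {"program": "Medicaid", "region": "South",     "hba1c_tested": True},
    {"program": "Medicare", "region": "South",     "hba1c_tested": True},
]

def rate_by(key, records):
    """Testing rate per value of `key` (e.g., 'program' or 'region')."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        hits[r[key]] += r["hba1c_tested"]
    return {k: hits[k] / totals[k] for k in totals}

print(rate_by("program", pooled))  # program-specific variation
print(rate_by("region", pooled))   # geographic variation
```

The same grouping, run over a large pooled repository, is what would surface disparities between Medicare and Medicaid or across regions.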
Private entities could also participate in the data repository, as long as they satisfy safeguards to assure data validity and reliability. Because such a pool enables broader, more population-based comparisons, private plans have an incentive to participate, both to improve provider selection and to evaluate their own performance.
The Agency for Healthcare Research and Quality (AHRQ) is well positioned to work with participating programs in developing and managing a pooled data repository. In designing the repository, AHRQ should seek input from consumers and other stakeholders. AHRQ’s research orientation provides the technical and analytical expertise needed to assess the validity of data and to develop reporting and data access strategies that meet the needs of various users. In establishing the repository, AHRQ will need to assure compliance with HIPAA requirements for patient privacy. It is anticipated that any patient-level data would be stripped of identifiers.
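Stripping identifiers before patient-level records enter the repository can be sketched simply. The field names below are hypothetical placeholders, not an enumeration of the HIPAA identifier categories, and a production system would need far more than field removal (e.g., handling dates and small cell sizes); this is a sketch of the idea only.

```python
# Hypothetical identifier fields to drop before pooling; a real HIPAA
# de-identification process covers many more categories than shown here.
IDENTIFIER_FIELDS = {"name", "ssn", "address", "birth_date", "mrn"}

def deidentify(record):
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

raw = {
    "name": "J. Smith",          # invented example data
    "ssn": "000-00-0000",
    "mrn": "A12345",
    "program": "Medicaid",
    "hba1c_tested": True,
}
print(deidentify(raw))  # {'program': 'Medicaid', 'hba1c_tested': True}
```

Only the non-identifying analytic fields survive, which is what allows the pooled repository to support comparative analysis without exposing individual patients.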
American Board of Medical Specialties. 2000. “ABMS Home Page.” Online. Available at http://www.abms.org/ [accessed Sept. 30, 2002].
Bates, D. W., J. M. Teich, J. Lee, D. Seger, G. J. Kuperman, N. Ma’Luf, D. Boyle, and L. Leape. 1999. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 6 (4):313-21.
Becher, E. C., and M. R. Chassin. 2001. Improving quality, minimizing error: making it happen. Health Aff (Millwood) 20 (3):68-81.
Birkmeyer, C. M., J. Lee, D. W. Bates, and J. D. Birkmeyer. 2002. Will electronic order entry reduce health care costs? Eff Clin Pract 5:67-74.
Buchanan, R. (CMS). 3 May 2002. Personal communication to Jill Eden.
California HealthCare Foundation. 2002. “Quality Nears Tipping Point in California: Accountability Efforts Multiply.” Online. Available at http://www.chcf.org/topics/view.cfm?itemID=19764 [accessed Sept. 6, 2002].
Chassin, M. 2002. Achieving and sustaining improved quality: lessons from New York state and cardiac surgery. Health Aff (Millwood) 21 (4):40-51.
Christakis, D. A., F. J. Zimmerman, J. A. Wright, M. M. Garrison, F. P. Rivara, and R. L. Davis. 2001. A randomized controlled trial of point-of-care evidence to improve the antibiotic prescribing practices for otitis media in children. Pediatrics 107 (2):E15.
Christopherson, G. (VHA). 6 June 2002. Personal communication to Barbara Smith.
Demakis, J. G., C. Beauchamp, W. L. Cull, R. Denwood, S. A. Eisen, R. Lofgren, K. Nichol, J. Woolliscroft, and W. G. Henderson. 2000. Improving residents’ compliance with standards of ambulatory care: results from the VA Cooperative Study on Computerized Reminders. JAMA 284 (11):1411-16.
Department of Health and Human Services. 2001. Medicare Fact Sheet: Providing Coordinated Care to Improve Quality of Care for Chronically Ill Medicare Beneficiaries. Washington DC: U.S. Department of Health and Human Services.
Eddy, D. M. 1998. Performance measurement: problems and solutions. Health Aff (Millwood) 17 (4):7-25.
Friedman, R. (CMS). 14 May 2002. Personal communication to Barbara Smith.
Gawande, A. A., and D. W. Bates. 2000. The use of information technology in improving medical performance. Part I. Information systems for medical transactions. MedGenMed Feb. 7:E14.
Georgetown University Medical Center. 2002. “The Imaging Science and Information Systems (ISIS) Center.” Online. Available at http://www.imac.georgetown.edu/aboutisis/research.htm [accessed Sept. 4, 2002].
Gladwell, M. 2000. The Tipping Point: How Little Things Can Make a Big Difference. Boston: Little, Brown, and Co.
Greenfield, S., S. Kaplan, and J. E. J. Ware. 1985. Expanding patient involvement in care. Effects on patient outcomes. Ann Intern Med 102 (4):520-8.
Hannan, E., H. Kilburn, M. Racz, E. Shields, and M. Chassin. 1994. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA 271 (10):761-6.
Hannan, E. L., A. L. Siu, D. Kumar, M. Racz, D. B. Pryor, and M. R. Chassin. 1997. Assessment of coronary artery bypass graft surgery performance in New York. Is there a bias against taking high-risk patients? Med Care 35 (1):49-56.
Health Resources and Services Administration. 2000. “The Hill-Burton Free Care Program.” Online. Available at http://www.hrsa.gov/osp/dfcr/about/aboutdiv.htm [accessed July 25, 2002].
Hibbard, J. H., E. Peters, P. Slovic, M. L. Finucane, and M. Tusler. 2001. Making health care quality reports easier to use. Jt Comm J Qual Improv 27 (11):591-604.
Hibbard, J. H., P. Slovic, E. Peters, and M. L. Finucane. 2002. Strategies for reporting health plan performance information to consumers: evidence from controlled studies. Health Serv Res 37 (2):291-313.
Hibbard, J. H. 1998. Use of outcome data by purchasers and consumers: new strategies and new dilemmas. Int J Qual Health Care 10 (6):503-08.
Institute of Medicine. 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington DC: National Academy Press.
Kaye, N., and M. Bailit. 1999. Innovations in Payment Strategies to Improve Plan Performance. Portland, ME: National Academy for State Health Policy.
Kihega, A. (IHS). 15 May 2002. Personal communication to Barbara Smith.
Landro, L., Wall Street Journal Online. 2002. “FDA is urged to hasten efforts to require bar codes on drugs.” Online. Available at www.online.wsj.com-Health.
MacTaggart, P. (CMS). 13 May 2002. Personal communication to Barbara Smith.
Markle Foundation. 2002. “News and Reference: Connecting for Health initiative.” Online. Available at http://www.markle.org/news/_news_pressrelease_062102.stm [accessed Sept. 26, 2002].
Marshall, M., P. Shekelle, R. Brook, and S. Leatherman, Rand. 2000. “Dying to Know: Public Release of Information About Quality of Health Care.” Online. Available at http://www.rand.org/publications/MR/MR1255/ [accessed May 15, 2002].
McGlynn, E., and J. Adams. 2001. Public release of information on quality. Pp. 183-202. In Changing the U.S. Health Care System: Key Issues in Health Services Policy and Management. 2nd edition. R. Andersen, T. Rice, and G. Kominski, eds. Jossey-Bass, Inc.
McGlynn, E. A., and R. H. Brook. 2001. Keeping quality on the policy agenda. Health Aff (Millwood) 20 (3):82-90.
McGlynn, E. A., E. A. Kerr, and S. M. Asch. 1999. New approach to assessing clinical quality of care for women: the QA Tool system. Womens Health Issues 9 (4):184-92.
McIntyre, D., L. Rogers, and E. J. Heier. 2001. Overview, history and objectives of performance measurement. Health Care Financ Rev 22 (3):7-21.
National Committee for Quality Assurance. 2002. “NCQA Report Cards.” Online. Available at http://hprc.ncqa.org/menu.asp [accessed May 6, 2002].
National Committee on Vital and Health Statistics. 2001. “Information for Health: A Strategy for Building the National Health Information Infrastructure.” Online. Available at http://ncvhs.hhs.gov/nhiilayo.pdf [accessed May 14, 2002].
Office of Management and Budget. 2002. “E-Government Strategy: Implementing the President’s Management Agenda for E-Government.” Online. Available at http://www.whitehouse.gov/omb/inforeg/egovstrategy.pdf [accessed Aug. 15, 2002].
Paone, D. 2001. Quality methods and measures: MMIP technical assistance paper no. 9 for RWJF, National Chronic Care Consortium.
Pennsylvania Health Care Cost Containment Council. 2002. “PHC4 Homepage.” Online. Available at http://www.phc4.org/ [accessed Oct. 18, 2002].
Rollman, B. L., B. H. Hanusa, T. Gilbert, H. J. Lowe, W. N. Kapoor, and H. C. Schulberg. 2001. The electronic medical record. A randomized trial of its impact on primary care physicians’ initial management of major depression. Arch Intern Med 161 (2):189-97.
Rosenbaum, S., B. M. Smith, P. Shin, M. Zakheim, K. Shaw, C. Sonosky, and L. Respasch. 1998. Negotiating the New Health System: A Nationwide Study of Medicaid Managed Care Contracts, Vol 1. Washington DC: Center for Health Policy Research, The George Washington University Medical Center.
Rundle, R. L. 2002. “The Wall Street Journal Online: More Health Care Providers Offer Patient Records Online.” Online. Available at http://online.wsj.com [accessed July 12, 2002].
Safran, C. 2001. Electronic medical records: a decade of experience. msJAMA. JAMA 285 (13):1766.
Schauffler, H. H., and J. K. Mordavsky. 2001. Consumer reports in health care: do they make a difference? Annu Rev Public Health 22:69-89.
Schneider, E. C., and T. Lieberman. 2001. Publicly disclosed information about the quality of health care: response of the U.S. public. Qual Health Care 10 (2):96-103.
Schneider, E. C., V. Riehl, S. Courte-Wienecke, D. M. Eddy, and C. Sennett. 1999. Enhancing performance measurement: NCQA’s road map for a health information framework. National Committee for Quality Assurance. JAMA 282 (12):1184-90.
Scientific Registry of Transplant Recipients. 2001. “ustransplant.org - Transplant Statistics - Annual Report.” Online. Available at http://www.ustransplant.org/annual_reports/ar02/ar01_appendixh.html [accessed Sept. 30, 2002].
Simon, L. P., and A. F. Monroe. 2001. California provider group report cards: what do they tell us? Am J Med Qual 16 (2):61-70.
Webster, S. A. 2001. “Detroit News Online: Hospitals Slow to Adopt Electronic Drug System.” Online. Available at www.detroitnews.com/2001/0107/09/a01-245238.htm [accessed July 3, 2002].