George Isham, M.D., M.S.
HealthPartners, Minneapolis, MN
DR. ISHAM: Thank you very much. I appreciate the invitation to be here, and I bring you greetings from Minnesota. The Institute of Medicine, in its report Crossing the Quality Chasm, concluded that the American health care delivery system was in need of fundamental change. In its view, current care systems cannot provide the quality of care that is needed, and simply trying harder will not work, but changing the systems of care will.
It is suggested in that report that high-performing, patient-centered teams are important in producing safe, effective, efficient, equitable, timely, patient-centered care. These teams require organizations that facilitate the work of the teams, and organizations require a supportive payment and regulatory environment. My comments will focus on the next steps needed for the creation of a national environment to support high-quality health care. I think there is a lot of work to be done to create the systematic approach called for in Crossing the Quality Chasm.
It is my belief that the United States should begin constructing a national support system to assure health, safety, and quality of health care. In other words, the environment needs to be structured to enable the production of safe, high-quality health care. This new system of supports requires elements that are needed at the national, at the state, and at the organizational levels. Dr. Gail Amundson and I at HealthPartners have described and used a seven-step process model for quality improvement to achieve substantial improvement in health and quality of care at HealthPartners in Minnesota. I will take this model as the organizing framework for my comments.
Step one in the model is to define focus. In other words, set goals or
objectives for the health care system. We did this at HealthPartners in 1994, revised it in 2000, and now have a third edition that we are working with today. The second step is to agree on best practice. In other words, make sure that the best science and evidence is the basis for the interventions that you are going to design to achieve those goals you established in step one. The third step is to establish standard measures and collection methods. The fourth step is to set performance targets against those standard measures. The fifth step is to align incentives in support of achieving the targets that were established in step four. The sixth step is to support the improvement process that is required by the care delivery system to achieve the targets set in step four. The seventh and last step is to assess and report on progress.
We think of these seven steps as a cycle, so that each time around the cycle there is improvement in the steps as well as progress toward the goals. We are now entering our third five-year improvement cycle at HealthPartners.
So, let us take these seven steps one by one and envision how they could be applied to the nation as a whole. Step one is to define focus. In my view the Department of Health and Human Services and other private and public entities should focus on the 20 priority areas that we identified in the IOM report Priority Areas for National Action, issued in 2003.
The recommendation from Crossing the Quality Chasm was to look at areas that had the potential for significant impact in reducing disability and death. In other words, look where there was a big gap to be closed. In addition, a potential for improvability is important, in that there is a reasonable chance to close the identified gaps. Lastly, inclusiveness is important in that the priorities involve many treatment settings, many ethnic and socioeconomic populations, all age groups, and preventive care through end-of-life care in many types of institutions, rural and urban.
If we were to achieve substantial progress on these 20 priority areas, the nation would be much better off. More important, we would in fact have learned a lot of lessons that we could apply to many other conditions. These could then be applied to creating the new systems of care that we need and to executing that transformation that is called for in Crossing the Quality Chasm.
Some progress has been made in the two years since the release of Priority Areas for National Action. For example, the priority areas have been endorsed by the National Quality Forum, which develops consensus measurement standards for the nation. But there is a lot more that could and should be done.
For example, for each of the priority areas, specific strategies need to be identified for reducing the gap between current and potential performance. These strategies should address the provision of safe, timely, efficient, effective, equitable, and patient-centered care in each of these priority areas.
Overuse as well as underuse needs to be addressed by these strategies. Waste needs to be driven out of the system so that one can use the resources and apply them elsewhere.
The second step is to agree on best practice. Resources such as the Centers for Disease Control’s Guide to Community Preventive Services and the Agency for Healthcare Research and Quality’s Guide to Clinical Preventive Services provide useful evidence-based tools for us to use in developing best practice. The medical literature and the National Guideline Clearinghouse also provide resources for the creation of evidence-based interventions that address the 20 priority areas.
We need better databases that include the information from all clinical trials, not just those that are published. We need a national system for assessing and displaying the quality of those trials. We need national technology assessments of new health care technologies, so we know what works and what doesn’t. We need evidence on the effectiveness of drugs and devices as compared with alternative treatments because we don’t know that today. There are, unfortunately, conflicting guidelines and advice out there based on incomplete or missing information on the effectiveness of those treatments.
So, for example, with cervical cancer screening, an old technology, HealthPartners has a pretty good performance level: 80 percent of the women in our system were screened in 2004. The national average that year was 81 percent, and the 90th percentile was 90 percent. So our rate is pretty good, but it could be better. We work very hard at getting better, but if you get underneath that figure by using the new electronic medical records capability that we now have at HealthPartners, the appropriate-use rate for that test is about 34 percent. In financial terms, about $1.9 million of services should be used to provide Pap smears to the women who are not getting them, but $8.8 million worth of services are being used by women who have already had hysterectomies or who have been tested more often than recommended. One possible reason is that conflicting guidelines for that test create excess demand among women who are concerned about the possibility of developing cervical cancer. The U.S. Preventive Services Task Force recommends a Pap smear at least every third year; the American Cancer Society recommends one every year. As a consequence, many women have been educated that they should have that test every year.
If we had clarity and consistency from the authoritative groups recommending screening, and assuming that screening at least every third year is an effective regimen, the difference between the $8.8 million in possible excess tests and the $1.9 million to be spent on the women not receiving the test would be $6.9 million saved. That would provide for just about half of the uncompensated care we provided at our hospital in St. Paul in 2004. So, recovering the waste from this one screening test and applying it to funding care for those without financial resources would go a long way toward addressing a pressing access-to-care issue in St. Paul, Minnesota. Different clinical practice guidelines based on different recommendations from different specialty societies are confusing to professionals and the public. Conflicting guidelines are known to be a significant barrier to the effective implementation of clinical practice guidelines. This issue needs to be addressed. Guidelines need to be harmonized across specialty and advocacy groups. Differences among them should drive research agendas, not political advocacy. The country can’t afford the consequences of these differences.
The second example of conflicting science as a barrier to better quality of care also comes from Minnesota. There are at least seven different guidelines for preventive care for children in Minnesota, among them one promulgated by the American Academy of Pediatrics, one by the American Academy of Family Physicians, one by the U.S. Preventive Services Task Force, one by Minnesota Medicaid, and one by the Institute for Clinical Systems Improvement in Minnesota, which is used by 75 percent of the clinicians in the state. Which one should we code into the automated decision support systems that Elliott wants us to deploy in the medical records systems now in place in Minnesota? Does the confusion over preventive care standards for children matter? I think it does.
A solid evidence base for interventions that are directed by strategies that have been devised to close the gaps in performance in each of the priority areas would give a sound scientific discipline to the effort to transform care. It won’t be easy, but it will give discipline to a systematic approach to achieve our priorities for improvement. It will also help stimulate innovation and more research on more effective ways to put evidence into practice.
The third step was to establish measurement standards. Much has been said about measurement standards in this town. The whole conversation often seems to be about measurement standards. There are a lot of different measurement standards out there. The Health Plan Employer Data and Information Set, or HEDIS, has standards for health plans; JCAHO has standards for hospitals; and the Physician Consortium for Performance Improvement has measurement standards for physicians. The National Quality Forum has developed a consensus process for approving measurement standards, but so far that has not resulted in a reduction in their number. We don’t have a specific and detailed set of valid, accurate, reliable, standardized measures of quality linked to the evidence-based guidelines and interventions determined by the specific strategies we need to adopt to close the gaps in care in the IOM’s 20 priority areas. We don’t have that logic or consistency or coherence of effort, and that should be changed. Furthermore, and more fundamentally, we do not have a system for collecting these measurements across all payers and providers of clinical services that produces relevant information for the nation, for states, and for local health care organizations to guide and drive their efforts at closing gaps in the 20 priority areas.
The Ambulatory Quality Alliance has proposed a national data stewardship board that could set those standards. Regional data collection pilots are being discussed that could set up regional data collection agencies to engage local health care systems and physicians and give them feedback on their performance using the measurement standards. I think that kind of national system of regional support organizations would be a useful direction.
IOM, as we heard earlier, will also soon produce a report on this topic that I eagerly look forward to reading.
The fourth step is to set targets. Aggressive targets need to be established for each of the measurement standards in each of the priority areas. For example, among the 20 priority areas, the aim for diabetes was to prevent the progression of diabetes through vigilant systematic management of patients who are newly diagnosed or at a stage of their disease prior to the development of major complications. Our goal at HealthPartners in 1994 was to reduce complications in persons with diabetes by 30 percent through active management of those cases. Another example of an aggressive target for our population of 20,000 people with diabetes is to achieve an average hemoglobin A1c of less than 7.
So there need to be very specific aggressive targets for each of the measurement standards developed to monitor the progress for each of the strategies that help close the gaps identified for each of the priority areas. A national system of targets would be similar to those developed for Healthy People 2010, but these would be focused on closing the quality gaps in the 20 priority areas.
The fifth step is aligned incentives. Elliott has already referred to this in his comments. There are many efforts at the Centers for Medicare and Medicaid Services and in the private sector to pilot this. There are over 100 pay-for-performance demonstrations in the private sector going on today. Incentives need to focus on supporting the achievement of the aggressive targets we set, as assessed by the standard, valid, reliable measures applied to the evidence-based interventions determined by the strategies devised to close the performance gaps identified in the 20 priority areas.
One of the approaches we have used is to embed incentives in product design to identify high-performing networks. The providers are graded on the quality and cost of their services, and co-payment differentials are established to give patients incentives to use the high-performing providers. Incentives for quality can also be used in bonus programs, which we have been doing for seven years. We pay bonuses for the achievement of specific targets linked to our priorities, for example, achieving that hemoglobin A1c of less than 7. A third way to deploy incentives is to use contract incentives for individual providers in a way that rewards improvement as well as the achievement of specific performance targets. Finally, and most controversially, incentives can be used by not paying for things that shouldn’t happen, such as the National Quality Forum’s “never events.” These are serious safety events that the National Quality Forum has identified as things that should never happen, for example, cutting off the wrong leg during surgery, sending a mother home with the wrong baby, or giving a contaminated medication or blood product. We are still the only organization in the country that has such a policy, although it is being discussed by a number of states, some other health plans, and the Physician Payment Review Commission. It is probably not sufficient to put new money into the system in the form of bonuses for quality or simply to establish targets and goals. Disrupting the cash flows that support the wrong thing happening is also needed. A lot of cash is flowing in health care for the wrong thing.
Step six is to support improvement. Care needs to be redesigned by those who are providing it. A series of regional support systems needs to be established that assists providers in the skills and techniques of quality improvement and that is linked to the data collection and reporting for the region and the health information infrastructure. A health technology infrastructure needs to be established that gives guidance on standards for health information exchange and that enables much of what we have discussed so far.
We collectively pay about $3.5 million annually in Minnesota for our quality improvement organization, the Institute for Clinical Systems Improvement, and about $800,000 annually for our collaborative measurement organization, the Minnesota Community Measurement Collaborative, which collects and reports information on the performance of doctors in Minnesota.
If you add those two budgets together and multiply by 50, you get $215 million annually. The country could have such a system of support for quality improvement for $215 million annually. That is roughly the
cost of two of those new F-22 fighter jets. It is cheap and well worth it, and we ought to get on with it.
The last step is assessing and reporting progress. Progress has been made. The Agency for Healthcare Research and Quality has established the National Healthcare Quality Report and has been reporting for two years now. Increasingly the focus must be on the 20 priority areas, and the results need to be disseminated more effectively to the public so that we who provide the care can be held accountable for it. A version of this report should be deployed at the regional level, highlighting the specific performance of local providers in achieving the targets set under the strategies developed to close the gaps in the 20 priority areas.
In my opinion, Congress and the Executive Branch also need to provide the necessary support for monitoring ongoing progress and for updating these priority areas over time, as called for in the National Healthcare Quality Report.
Priorities should change over time as we achieve our targets. This needs to be a dynamic and living system. Dynamic and living systems require nourishment in the form of adequate funds and leadership.
Using this system at HealthPartners over the past 20 years for the management of diabetes, our average hemoglobin A1c has fallen from 8.7 to 6.8, below that target of 7.0. Average systolic blood pressure in this population of 20,000 persons with diabetes has fallen from 134 to 122. The rate of amputation, which is a complication of diabetes, has fallen from 10 to 4.5 per 1,000 persons. Heart attacks have fallen from 16 per 1,000 to 12 per 1,000, and new cases of retinopathy, the eye complication of diabetes, have fallen from 78 per 1,000 to 62. The average cost per diabetic patient is estimated to be $2,000 under the predicted cost per diabetic patient at 10 years. For 20,000 diabetics that is roughly $40 million in costs saved over 10 years. Cost and quality are indeed linked.
I think that we need a system that can achieve what is analogous to this performance on a national level. To achieve that, we need leadership from physicians on improving quality of care. I am encouraged to see the American Board of Internal Medicine and leading specialty societies really addressing the issue. In the specialty societies and hospitals, leadership is also needed. The states need to lead as well, by developing regional examples of the seven-step system that I have outlined here. And as I have stated, there is much more that needs to be done by the federal government.
In my opinion the United States should begin to construct a national seven-step support system to ensure health and safety of patients and the quality of their health care. In other words, the environment needs to be restructured to enable the production of safe, high-quality health care.
DR. FINEBERG: Thank you very much, George, for a wonderful consideration of specific actions and steps that are particularly important. There was a lot of commonality with what Elliott introduced regarding the relationship between cost savings and improving quality, an important concept.
Our third speaker is Lucian Leape, who has been a sage in this field of quality improvement and improving patient safety. Lucian achieved renown and international distinction as a pediatric surgeon and along the way became increasingly interested in the larger issue of safe, high-quality health care.
For the past 20 years or so, he has been increasingly forceful, effective, and outspoken on the importance of the problem and the specific needs for those in the profession and around it to take steps to improve the safety of care.
Lucian was one of the people who served on the Committee on the Quality of Health Care in America, which produced the original reports To Err Is Human and Crossing the Quality Chasm.
It is a great pleasure to welcome and introduce to you Dr. Lucian Leape.