16
Research and Capacity Building: Issues Raised by the Institute of Medicine Report

Harold S. Luft

The chapter on research, training, and capacity building in the Institute of Medicine (IOM, 1990) report outlines in substantial detail the research agenda and questions concerning capacity building, so I will take this opportunity to give my perspective on the rationale behind these recommendations. My view is that this is not just the usual researchers' tag line at the end of the paper that reads "and more research is needed." From my perspective, it really arises from frank fear and terror concerning the implications of our larger agenda and the problems facing the national implementation of a quality assurance program.

The fear and terror arise from the gap between "policy-relevant research" and something ready for routine implementation. Research always needs to narrow the focus, to select the cases, to look at the underlying signal, and not to get confused by random, extraneous noise. We were reminded during the conference that in the database on patients undergoing cardiac catheterization, only 6 to 12 percent would have met the criteria for inclusion in the usual randomized controlled trials. Researchers need to focus on homogeneous populations to evaluate in a reasonable fashion, with constrained research dollars, the effectiveness of a new treatment or approach. Narrowing the focus increases the ratio of "signal to noise." That is the research role.

Practitioners, however, are faced with large amounts of noise and a little bit of signal underneath. They cannot say, "Well, I won't treat you because you are not in the 6 percent that meets the criteria of a controlled trial." They have to treat the whole population.
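The narrowing-of-focus argument can be sketched numerically. The short simulation below (all numbers are invented for illustration, not taken from the catheterization database mentioned above) draws a heterogeneous patient population, applies strict trial-style inclusion criteria, and shows that the eligible slice, though small, has a much higher ratio of a fixed treatment "signal" to outcome "noise."

```python
import random
import statistics

random.seed(0)

# Hypothetical population: each patient has a baseline severity and some
# idiosyncratic noise; a treatment would add a fixed benefit on top.
N = 10_000
TREATMENT_BENEFIT = 5.0

patients = [(random.gauss(50, 20), random.gauss(0, 5)) for _ in range(N)]

def signal_to_noise(group):
    """Fixed treatment benefit divided by the standard deviation of
    outcomes in the group: a crude standardized effect size."""
    outcomes = [baseline + noise for baseline, noise in group]
    return TREATMENT_BENEFIT / statistics.stdev(outcomes)

# Strict trial-style inclusion criteria: keep only the homogeneous
# middle slice of baseline severity.
eligible = [p for p in patients if 48 <= p[0] <= 52]

print(f"eligible fraction: {len(eligible) / N:.0%}")
print(f"signal-to-noise, whole population: {signal_to_noise(patients):.2f}")
print(f"signal-to-noise, eligible subgroup: {signal_to_noise(eligible):.2f}")
```

Under these assumed parameters, only a small fraction of the population qualifies, yet the standardized effect in the eligible subgroup is several times larger than in the population as a whole, which is exactly the researcher's reason for narrowing the focus and the practitioner's problem with the result.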

In an analogous sense, much of what has informed the IOM committee in its efforts to come up with suggestions for changing the way quality assessment and quality assurance are done is based upon research results. However, we always remember deep in our guts—I would not say our hearts necessarily—that such research results are usually based on carefully designed studies of relatively homogeneous populations in one or two settings. The questions of generalizability, validity, reliability, and applicability to the real world of 5,000 or 6,000 hospitals, 400,000 physicians, and several million nurses and other health care practitioners have not been tested. It is a little bit scary to think about implementing a proposal based upon such a "thin" body of research. In fact, if someone gave me a magic wand and offered me a choice—take your chances with the Congress and a 10-year agenda and maybe have the strategy go through (or maybe not), or wave the magic wand and have the whole program implemented tomorrow—I would take my chances with the Congress because I am not sure the program would work as we outlined it. The committee was not sure it would work; that is why we identified a 10-year agenda with substantial vagueness and many open questions. In essence we were saying that we do not know the specifics, and it is going to take at least 10 years to get from here to a point where we might have something ready for "prime time."

VARIABILITY

To get ready for prime time we need to look at tasks that might be usefully categorized under the headings of basic research, applied research, and dissemination. Then we can move to capacity building.

The conference discussions often turned to variations—everybody knows that there is wide variability in how certain kinds of procedures and techniques are applied; what we do not yet know is what accounts for that variability. Do the variations reflect uncertainty, in other words, a lack of science? Put another way, God has not told us that one of these two techniques really works better. Do the variations reflect unmeasured clinical factors?
Even when you include only 6 percent of a population in a randomized trial, unmeasured clinical variables account for differences in outcomes. That is, one technique may truly be better than the other when appropriately applied under the right criteria and conditions, but we do not yet know what those conditions are. The result would appear to be random noise, with one technique looking sometimes better, sometimes worse. Do variations reflect patient preferences? Some patients really like one approach rather than another, as Albert Mulley (1991) pointed out. Finally, do variations reflect variable competencies of providers? Most of the things we are looking at are not single shots of penicillin produced under a very tightly controlled manufacturing process. Instead, they are interventions applied by people in organizations with varying levels of quality. Most of them would be considered reasonably good, but some

are better than others. If so, you would naturally expect different kinds of outcomes. In fact, variations in outcomes are probably not due to any one of the four factors above; rather, variations are most likely due to some combination of those four things, and we need to determine their relative importance. Furthermore, the relative importance of the several explanations probably depends on the setting, the intervention, and other circumstances. That is a lot of research when you consider the number of different procedures and the number of different medical conditions to which they can be applied. The answer for one problem is not going to be the same as the answer for another in terms of the relative importance of scientific uncertainty, patient variability, patient preferences, and provider quality. We know that there may be different relative weights, but we do not know what the weights are.

PROCESS MEASURES

There is a long history of using process-of-care measures as the yardstick for quality measurement. We certainly need to look at technical aspects of quality—whether the procedure was done appropriately—but that requires explicit criteria. How do we develop clear, valid, reliable, flexible, and clinically adaptable standards? Sheldon Greenfield has done a lot of work on criteria mapping, narrowing down the problem by using branching logic to give us a better handle on aspects of good quality care. The question is, now that we know it can be done for certain things, what proportion of all patients can be criteria-mapped into a category such as "yes, this is good" or "no, it is not"? It is one thing to know that it can be done. It is another thing to go into a Medicare Peer Review Organization (PRO) and say, "Okay, here's the list; apply it to all the patients."

ART OF CARE

The art of care is extremely important, and it is not just warm, fuzzy stuff—I know; I am from California.
There is good evidence for the placebo effect, that is, that patients react to sugar pills as well as to real medicine. There is also the anger and frustration patients feel toward a medical care delivery system, even a system that is just not delivering the food warm enough, and that frustration may have an impact on the patient's biological outcome. This is not just patient satisfaction, and we are uncertain how to measure it. I suspect patient satisfaction directly affects patient outcomes, as well as being a separate measure that patients talk about.

OUTCOMES

Severity Adjustment

If we are going to look at outcomes, we need to have severity adjusters. Here is a substantial policy problem: as you start looking at outcome measures, anybody who ends up on the wrong end of the quality assessment measure says, "Well, you didn't appropriately adjust for severity. Of course, every case is different." At some point we may have sufficiently accurate severity adjusters to satisfy everyone, but I doubt it.

We have to recognize that severity adjustment is not just a problem for former econometricians who use big data sets. Severity adjustment is needed even in the classic randomized trial. All randomization buys is the lack of a consistent nonequivalence between the two groups, the control and the experimental. If you run a study a thousand times with a thousand patients, on average you will wash out all of the nonequivalence. If you do the study only once, the groups can be nonequivalent, even with the best randomization, so you have to look at age, gender, and all of the other things that could potentially account for differences. That is severity adjustment, even in the context of a randomized trial, and inconsistent results across various studies may in fact be a consequence of nonequivalence of the underlying populations.

Health Status

We need improved measures of health status and functional outcomes beyond dead or alive. For some patients, it is not clear which is better, and one needs to look carefully at this. (For example, some hospitals with high death rates claim that they are sent patients who are terminally ill.) There are several good measures of functional status, but more are needed, especially for subgroups of particular importance to the Medicare population such as the frail elderly and the homebound. We also need conceptual work on developing summary scores and comparing different measures. These are not simple problems.
For example, consider something as clear and as objective as evaluating automobiles. Consumer Reports comes out with rankings of cars every year, but the Consumer Reports rankings are different from the ones developed by Road & Track. The rankings depend on what sort of car you like and how you like to drive. Likewise, patient reports of, or preferences for, level of health may be very different when different scales are used.
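The scale-dependence point can be made concrete with a small sketch. Below, three hypothetical patient profiles are scored on the same functional-status domains under two summary scales that weight those domains differently; the profiles, domain names, and weights are all invented for illustration.

```python
# Hypothetical functional-status profiles: scores 0-100 on three domains.
patients = {
    "A": {"mobility": 90, "pain": 40, "social": 70},
    "B": {"mobility": 60, "pain": 85, "social": 65},
    "C": {"mobility": 75, "pain": 60, "social": 80},
}

# Two summary scales over the same domains, with different (made-up) weights.
scale_1 = {"mobility": 0.6, "pain": 0.2, "social": 0.2}  # mobility-centered
scale_2 = {"mobility": 0.2, "pain": 0.6, "social": 0.2}  # pain-centered

def summary(profile, weights):
    """Weighted summary score for one patient profile."""
    return sum(profile[domain] * w for domain, w in weights.items())

def ranking(weights):
    """Patients ordered from best to worst summary health."""
    return sorted(patients, key=lambda p: summary(patients[p], weights),
                  reverse=True)

print("scale 1 ranking:", ranking(scale_1))  # A, C, B
print("scale 2 ranking:", ranking(scale_2))  # B, C, A
```

The same underlying data yield opposite "best health" orderings depending solely on which scale is used, which is the conceptual problem with summary scores noted above.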

CONTINUOUS IMPROVEMENT

During the IOM committee deliberations (and this conference) we have heard about continuous improvement models. I was very pleased to hear Chip Caldwell's discussion of the program at West Paces Ferry Hospital (Caldwell, 1991). We now know that one operating model really exists; earlier we did not. That is important because, as Alain Enthoven is fond of pointing out, economists are very good at spending a lot of time proving theoretically that certain things cannot happen. What an empiricist does is bring one into the room and show you that it exists. So, yes, there is a continuous improvement program at West Paces Ferry. What we do not know is, how important is the selection effect associated with its presence? I am sure that if we went back two years, we could have found, somewhere in the country, another hospital that was doing something that looked like a continuous improvement model without the same terminology. There may be good managers and not so good managers.

Can the continuous improvement model be transferred outside of the Hospital Corporation of America without transferring those people, with that corporate culture and with that environment? Can it be implemented effectively in a random sample of hospitals, not a self-selected one? How do we take the special circumstances of the medical care system with its substantial regulatory overlay—licensing, for example, and certain publicly designed rules and regulations about how organizations and individuals are expected to behave—and superimpose a continuous improvement model that by its very nature is saying, "Let's change the way we do some things"? A continuous improvement model might lead a hospital to decide that it is better to have nurses do certain things that physicians previously have been doing because, even though they are not licensed to do those things, they nonetheless do them better. Should the hospital try it?
What risks is it exposing itself to? Alternatively, should the hospital say, "We can't accept this continuous improvement model because its logic would lead us to want to do certain things that, however reasonable, are illegal under current regulations"? What would happen if "radical" changes were implemented under a continuous improvement model and there was a malpractice suit because of the deviation from standard practice?

LINK BETWEEN PROCESS AND OUTCOME

There is a wide range of issues in the area of applied research. What is the linkage between process and outcomes? Sometimes when doing quality assessment, we are going to want to focus on one (e.g., processes) rather than the other. It is very hard to think about applying outcome measures to

individual physician office visits because they are usually a small part of a large episode of care. We can, however, look at the process to see if it makes some sense. By contrast, we might use outcomes for population-based measures or for long periods of care, such as home care settings. However, once we start to employ multiple measures, how do we apply them with an even hand? For example, if we are going to sanction a practitioner or provider for poor care, is it fair to sanction one group based on process measures and another based on outcome measures? How much poor process is considered equivalent to an excess number of deaths?

PRACTICE GUIDELINES

As the IOM committee was finishing its report, the U.S. Congress mandated the Agency for Health Care Policy and Research to explore the area of practice guidelines. Many questions can be posed at this juncture. What are the criteria for choosing guidelines? How applicable will they be to the broad range of clinical practice? Is the health services research community going to be able to deliver? (Probably not, but we could ask that it be held to no stricter standards than the Congress with respect to Gramm-Rudman-Hollings.) How will these guidelines be implemented? Informing the practitioner community about them can be done through publication in the Journal of the American Medical Association or somewhere else, but how do you then encourage behavior change? What if information alone is not enough?

That then gets to the question of how best to change and modify professional behavior. This means taking into account the problems of applying guidelines to clinically diverse patients, assuming that optical disks filled with specific indications down to the individual patient level are not a realistic option. If you cannot do that, then you must draw guidelines more broadly to account for wide variability in severity, indications, comorbidities, and similar factors.
As you do that, the guidelines become broad enough to allow enormous variability in practice for situations that really should be handled in the same way. How do you deal with that kind of conflict? What is the most appropriate method for the diffusion of guidelines? What is the value, positive or negative, of a government or professional society label on a guideline? What are the antitrust issues when one applies guidelines at a local level? Only a relatively small number of communities have a very large number of hospitals, for instance, New York, Chicago, Los Angeles, and Philadelphia. Once you get below that size, you are getting into medical care communities in which everyone knows everyone else. They are all competing with each other. An area of fifty thousand people can be designated a metropolitan area, and there are over 200 such

areas with fewer than five hospitals. How do you apply guidelines and assessment in that kind of environment while encouraging everybody to compete?

SPECIAL SETTINGS

Ambulatory Care

Research needs to be done on the assessment and assurance of quality in different kinds of settings. Ambulatory care is far more difficult to assess than hospital care, yet that is where more and more of the action is taking place. We are not just talking about the standard office visit for an upper respiratory infection, but also, for example, free-standing cardiac catheterization units. Until several years ago, cardiac catheterization was always done in a hospital setting. Questions of appropriateness, poor technical quality, and the like can be just as important in such settings, but the organizational structure for quality assessment and assurance is very different.

Long-Term and Community-Based Care

Our committee did not look at the nursing home area because the IOM had earlier released a study on that topic. That does not mean, however, that long-term and community-based care do not need to be examined further and incorporated into an integrated system. Home health care is a major priority. We looked at it briefly, but partly because there is so little evidence, much more research needs to be done. One of the special problems in this area is the collection of data; no detailed routine medical record exists that can be unobtrusively reexamined after the fact. Moreover, the actual collection of outcome data—asking patients how they are doing—may in fact be a wonderful intervention and make them feel better. The "Hawthorne effect" may actually be a desirable outcome.
Health Maintenance Organizations

Health maintenance organizations (HMOs) have often done quality assurance activities on their own, but one needs to take into account the different practice styles, different admission rates, and different kinds of settings in which HMOs deliver care. What is a fair comparison between an HMO practice and a fee-for-service practice? Maybe we should evaluate HMOs on a population basis, because they are responsible for populations, and similarly evaluate fee for service on a population basis and say, for instance, that "the fee-for-service practitioners in Philadelphia are just not

doing a very good job relative to the HMOs there. You, the health care professionals, need to figure out what the problem is and work it out."

Rural Settings

Rural health care has a set of unique problems, partly because there are few providers. This factor causes access problems, about which we heard repeatedly, but it also causes problems for quality assurance. How, for instance, would you get a reasonable external opinion when there are only two neurologists in the whole state? They are likely to be either partners or competitors, so whom do you get to review the other's charts? If you go out of state, then you have out-of-state standards, a situation that is often resented by those few practitioners or providers. In many instances, hospitals are so small that effective internal peer review may be impossible.

FINANCING

We also need to look at the effect of organization and financing issues on quality assurance. How well do various quality assurance methods work in different kinds of settings? For example, are they equally applicable in open-staff and closed-staff hospitals? What if the hospital starts marketing its services in competition with its medical staff? What about the integration of incentive systems, pulling together Medicare Part A and Part B payments in an HMO or under selective contracts? For example, how would things change if the Health Care Financing Administration started selectively contracting for coronary artery bypass surgery and other specialized care and said, "We'll give you a lump sum. You handle both medical and hospital costs and quality assurance as the whole package."

DIFFUSION

There are also research issues in diffusion. We need to think about data systems and hardware. How can we pull together a wide variety of data and make them equally reliable and valid, so that, in fact, data are being recorded in the same fashion? These are not idle questions.
As part of another study, some colleagues and I examined discharge abstract data from California and found that for one hospital, 51 patients undergoing cardiac bypass surgery had exactly 13 procedures listed; few had either 12 or 14. Somebody must have been running a protocol, and all patients received 13 procedures during their stay, or at least they were all recorded as having had 13 procedures. Aberrant patterns such as this are sure to cause problems when analyzing data across hospitals.
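A screen for patterns like the one just described can be sketched in a few lines: flag any hospital where a single procedure count dominates the distribution of discharge abstracts. The hospital names and counts below are fabricated to echo the pattern in the text, and the 80 percent threshold is an arbitrary choice that would need tuning in practice.

```python
from collections import Counter

# Hypothetical procedures-per-discharge counts for bypass patients, by
# hospital. "Hospital B" mimics the anomaly: nearly every abstract lists
# exactly 13 procedures.
procedure_counts = {
    "Hospital A": [11, 12, 13, 12, 14, 13, 12, 11, 13, 14, 12, 13],
    "Hospital B": [13] * 51 + [12, 14],
}

def modal_share(counts):
    """Fraction of discharges carrying the single most common value."""
    _, top = Counter(counts).most_common(1)[0]
    return top / len(counts)

# Flag hospitals where one value accounts for more than 80% of abstracts.
for hospital, counts in procedure_counts.items():
    share = modal_share(counts)
    flag = "SUSPECT" if share > 0.8 else "ok"
    print(f"{hospital}: modal share {share:.0%} -> {flag}")
```

A screen this crude would not prove that anyone was "running a protocol," but it would surface hospitals whose recording practices differ enough to distort cross-hospital comparisons.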

CAPACITY BUILDING

We also need substantial efforts to build capacity, because we do not have the human resources to answer these questions, either on the underlying research side or in applying the answers at the local level. What should be the role of continuing medical education courses? What should be the role of professional associations in encouraging careers in quality assessment and quality assurance? We need to identify a viable career path for people who really want to do quality assessment and assurance, rather than treating it as a side issue done over a sandwich once a month as part of a medical staff commitment. We need to figure out how to add courses to undergraduate medical and other professional education curricula to encourage health care providers to look at patterns of care, to think about patient preferences, to consider variability in outcomes—and to do so as a normal, routine activity rather than considering everything as an individual situation.

We also need to consider how to educate patients by using various forms of media—to encourage them to ask questions, to point them to information resources, and to help them become accustomed to viewing outcomes as a probabilistic phenomenon. People need to move away from the notion that they are definitely going to get better or that an operation is too dangerous because they may die. Rather, they need to develop an understanding of what it means to have a 2 percent risk of death. Most people do not understand that at all in any intuitive sense. Yet we are now saying physicians have to inform patients about risks. Risks are not yet information; they are data. What we need to do is think about how to provide usable information rather than data.

FUNDING

In terms of funding, we are talking here about approaches, issues, and problems that are not just Medicare oriented.
Our charge was to focus on quality assessment and quality assurance in the Medicare program, but the basic tools, the capacity-building issues, and the underlying research that needs to be done are really a public good. They will affect all patients in all settings, with perhaps some minor exceptions; if you focus on Medicare models, you are not going to do a lot on pediatrics, but one needs to take a broader perspective. Because research and capacity building for quality assurance are a public good, these activities will be underfunded if they are left to private sources. Consequently, there needs to be a federal commitment to doing more in this area. We have already seen an increased commitment. What we need to keep in mind is that this is a long-term agenda. We need to build the capacity. We need to start doing the basic

research. It will take time for the results to come out. It cannot all be done immediately, but we need to begin somewhere. We have tried in this IOM report to outline an agenda to help point us toward where we should begin.

REFERENCES

Caldwell, C. Organization- and System-Focused Quality Improvement: A Response. Pp. 37-43 in Medicare: New Directions in Quality Assurance. Donaldson, M.S., Harris-Wehling, J., and Lohr, K.N., eds. Washington, D.C.: National Academy Press, 1991.

Institute of Medicine. Medicare: A Strategy for Quality Assurance. Lohr, K.N., ed. Washington, D.C.: National Academy Press, 1990. (See especially Volume I, Chapter 11.)

Mulley, A.G. A Patient Outcomes Orientation: The Committee View. Pp. 63-72 in Medicare: New Directions in Quality Assurance. Donaldson, M.S., Harris-Wehling, J., and Lohr, K.N., eds. Washington, D.C.: National Academy Press, 1991.