a standard against which to compare outcomes. As a consequence, we cannot tell whether the institution with the best outcomes is nonetheless falling short of what it should achieve, or whether the institution with the worst outcomes is in fact doing quite well. We only know how they compare with each other. If the outcomes are not risk-adjusted, they can be even more difficult to interpret. This does not mean that studies cannot use outcomes to shed light on variations in quality. For example, prescription of beta blockers after a heart attack is a frequently used measure of quality. One study found that only about one in five eligible patients with a heart attack received beta blockers within 90 days of hospital discharge, and also that those who received the treatment were much less likely to die than those who did not (Soumerai et al., 1997). Another study showed that poorer quality of care for children with asthma was associated with more hospitalizations (Homer et al., 1996).
We found a similar limitation with satisfaction ratings, which some consider a type of outcome. We do not report on levels of satisfaction because it is difficult to determine what level is acceptable. There is generally no standard against which to compare the results, and we do not know whether the institution with the best satisfaction ratings could and should be doing much better.
Studies of access to care are not typically classified as quality-of-care studies, but a person who is unable to obtain health care could hardly be said to be receiving good-quality care. Access studies are beyond the scope of this report. However, we need to keep in mind that quality-of-care studies often measure quality only for people who have interacted with the health care system and so tend to overstate the quality of care received by the population as a whole (Franks et al., 1993a, 1993b; Lurie et al., 1984, 1986; Sorlie et al., 1994).
In general, structural measures have not been consistently shown to relate either to process quality or outcomes, but there are exceptions. For example, volume of care provided (in other words, the number of procedures performed or the number of patients cared for) by an institution or clinician has often been found to relate to quality (Hannan et al., 1989, 1995; Kelly and Hellinger, 1986; Kitahata et al., 1996; Luft et al., 1979; Phibbs et al., 1996; Riley and Lubitz, 1985; Stone et al., 1992).
Another type of study does not provide direct evidence of the quality of health care but is useful for identifying reasons for poor quality. Studies in which physicians report what they generally do, or what they would do in a particular scenario, can be informative, especially when physicians report practices that indicate poor quality. Although these studies do not describe care provided to individual patients, they can indicate a need for further education or other efforts to improve clinical practice.
Finally, we note that our search mechanism almost certainly missed articles with relevant data. Many studies not intended as quality-of-care studies provide