to have remarked, "When the Okies left Oklahoma and moved to California, they raised the average intelligence level in both states."
This bias has been noted in studies of changes in survival over time and in comparisons of survival across geographic areas (Farrow et al., 1995) or by hospital type (Greenberg et al., 1991). In Greenberg and colleagues' study of patients with non-small-cell lung cancer, the significantly lower mortality observed at university cancer centers relative to community hospitals disappeared when functional status, rather than stage, was used to adjust the analysis. Patients diagnosed at academic cancer centers underwent more staging procedures (e.g., bone and liver scans) and tended to be assigned a higher stage than similar patients diagnosed in community hospitals (Greenberg et al., 1991).
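The arithmetic behind this stage-migration bias can be shown with a toy calculation; all survival figures below are hypothetical, chosen only to illustrate the effect:

```python
# Toy illustration of stage migration (the "Will Rogers phenomenon"):
# reclassifying the sickest "early-stage" patient into the late-stage
# group raises the average survival of BOTH groups, even though no
# individual patient's prognosis has changed.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical 5-year survival probabilities (%) under the old staging.
early_stage = [90, 80, 60]   # mean ~76.7
late_stage = [50, 40]        # mean 45.0

# A more intensive staging work-up reassigns the patient with 60%
# survival from the early-stage group to the late-stage group.
early_after = [90, 80]       # mean 85.0 -- went up
late_after = [60, 50, 40]    # mean 50.0 -- also went up

assert mean(early_after) > mean(early_stage)
assert mean(late_after) > mean(late_stage)
```

Both stage-specific averages improve while overall survival is unchanged, which is exactly why stage-adjusted comparisons across institutions with different staging intensity can mislead.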
Another potential source of bias in observational studies is case selection. Findings from evaluations of the effect of managed care on cancer outcomes may not be generalizable if a study is limited to a convenience sample of a few plans. Multistage sampling techniques, selecting health plans, facilities within plans, and patients within facilities, could be used to obtain a nationally representative sample of patients.
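The multistage sampling idea described above can be sketched as follows; the sampling frame, names, and sample sizes are hypothetical, and a real design would also attach probability weights at each stage:

```python
# Minimal sketch of multistage sampling: draw health plans, then
# facilities within each sampled plan, then patients within each
# sampled facility. Purely illustrative; not a weighted survey design.
import random

def multistage_sample(frame, n_plans, n_facilities, n_patients, seed=0):
    rng = random.Random(seed)
    sample = []
    # Stage 1: sample plans from the frame.
    for plan in rng.sample(sorted(frame), min(n_plans, len(frame))):
        facilities = frame[plan]
        # Stage 2: sample facilities within the plan.
        for fac in rng.sample(sorted(facilities),
                              min(n_facilities, len(facilities))):
            patients = facilities[fac]
            # Stage 3: sample patients within the facility.
            sample.extend(rng.sample(patients,
                                     min(n_patients, len(patients))))
    return sample

# Hypothetical frame: plan -> facility -> list of patient IDs.
frame = {
    "plan_a": {"fac_1": ["p1", "p2", "p3"], "fac_2": ["p4", "p5"]},
    "plan_b": {"fac_3": ["p6", "p7", "p8", "p9"]},
}
patients = multistage_sample(frame, n_plans=2, n_facilities=1, n_patients=2)
```

Sampling clusters at each stage, rather than pooling all patients from a handful of convenient plans, is what protects the resulting estimates from the case-selection bias described above.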
Another factor that makes it difficult to interpret the available health services research literature is the possibility of "publication bias": studies showing the expected relationship are more likely to be published than those that find no relationship. Evidence of such bias exists for clinical trials and for other types of research, including observational studies. Underreporting of negative results appears to stem from investigators' failure to submit manuscripts for publication rather than from selective rejection of negative results by journal editors (Dickersin, 1997).
The next section reviews health services literature on hospital and provider characteristics and on managed care. The review is not exhaustive: only articles written in English were identified, and some studies of patients treated before 1980 were excluded.
One structural measure that has been found to relate to outcomes for some conditions or procedures is volume, which refers to the number of times each year that a hospital (or clinician) performs a particular procedure or cares for patients with a particular disease. Since the late 1970s, researchers have been studying this volume-outcome relationship. The area that has been studied most intensively is interventional cardiology, particularly coronary artery bypass graft (CABG) surgery (e.g., Hannan et al., 1995, 1997b) and percutaneous transluminal coronary angioplasty (PTCA, "angioplasty") (e.g., Ellis et al., 1997; Jollis et al., 1994, 1997). In all of these cases, a positive association was found: the more procedures performed per hospital (or, where it was studied, per physician), the better the outcomes, including fewer immediate procedure-related deaths and lower complication rates. Similar findings have been reported for heart transplants (Hosenpud et al., 1994).
Volume-outcome relationships have been reported for other procedures and services, including hip replacement (Kreder et al., 1997), abdominal aortic aneurysm surgery (Hannan et al., 1992; Kantonen et al., 1997), craniotomy for cerebral aneurysm (Solomon et al., 1996), hip and