Medicare: A Strategy for Quality Assurance - Volume I
tests (whether positive or negative, or specific findings) are typically not recorded. New technologies or established technologies used in totally new ways may not be given codes for several years (PPRC, 1989).
Third, errors in recording and coding events can threaten the validity of the data. Some errors in recording are random, but some are systematic, especially when financial incentives encourage “upcoding” (systematically coding for services that are more intense or extensive, and thus better reimbursed, than those actually provided) and “unbundling” (billing every component of a procedure separately) (PPRC, 1989). The accuracy of the data will vary with who completes a form and with that person's incentives and training. Diagnoses on hospital records are more likely to be accurate than diagnoses on outpatient-visit claims, although outpatient diagnoses can sometimes be grouped around a type of problem (e.g., gynecologic problems) to minimize this weakness.
Fourth, the data bases include only contacts with the health care system and of these, only contacts that generate a claim. A person who is ill but has no encounter with the health care system produces no record. Copayments and other barriers to access may accentuate this bias and lead to underestimates of poor outcomes.
Fifth, measuring the benefits of treatment is very difficult because positive outcome measures are not part of administrative data bases. Approximations may sometimes be attempted based on a decreased frequency of hospitalization or the length of intervention-free periods.
Sixth, analysis of large administrative data bases is a slow process. Even with the major improvements in electronic data transfer and processing that are envisioned, such analysis is not well suited to rapid feedback on practice patterns.
Small Area Variations Analysis (SAVA)
SAVA is a way of using administrative data bases that has become a major area of research in its own right (Paul-Shaheen et al., 1987). Small area variations analysis can identify areas of high, average, and low rates of hospital services usage, but the methodology cannot discriminate appropriate from inappropriate care. As a problem-detection method, SAVA should be regarded as a screening methodology for alerting analysts about areas where quality problems may be occurring and for which more focused review may be needed. A strength of SAVA is that it can direct attention to potential areas of underuse as well as overuse.
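The screening logic described above can be illustrated with a minimal sketch: compute a per-area rate of hospital service use and flag areas whose rate departs markedly from the overall mean. The area names, counts, and the 25 percent threshold are invented for illustration; actual small area analyses also adjust rates for the age and sex composition of each area's population before comparing them.

```python
# Hypothetical SAVA-style screen: per-area hospitalization rates,
# with outlying areas flagged for more focused review.
# All figures below are invented for illustration.

admissions = {"Area A": 1200, "Area B": 450, "Area C": 800}
population = {"Area A": 10000, "Area B": 9000, "Area C": 10000}

# Admissions per 1,000 residents in each area
rates = {a: 1000 * admissions[a] / population[a] for a in admissions}

mean_rate = sum(rates.values()) / len(rates)

# Flag areas more than 25% above or below the mean (an arbitrary
# illustrative threshold). Note the screen flags BOTH directions:
# high rates suggest possible overuse, low rates possible underuse.
flags = {}
for area, rate in rates.items():
    if rate > 1.25 * mean_rate:
        flags[area] = "high (possible overuse)"
    elif rate < 0.75 * mean_rate:
        flags[area] = "low (possible underuse)"
    else:
        flags[area] = "average"
```

As the text emphasizes, a flag from such a screen says nothing about whether the care in a flagged area was appropriate; it only identifies where more focused review may be warranted.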
Volume of Services (Individual or Organization)
After reviewing the literature on the possible relationship between volume of procedures done by institutions and the outcomes of those proce-