Chapter 3

ANALYSIS

The analysis was intended to assess the reliability of six abstracted information items chosen for study and investigate several factors which might affect data reliability, particularly for information on principal diagnosis and procedure. The effect of data reliability on hospital utilization statistics such as diagnostic specific admission rates and lengths of stay was also examined.

TOTAL FREQUENCIES OF DISCREPANCIES

Table 1 shows the frequency of discrepancies between the Medicare record and the Institute of Medicine (IOM) abstract for each data item selected for study. In general, the data were highly reliable for dates of hospital admission and discharge and the sex of a patient. Information was less reliable for data reflecting the principal diagnosis and principal procedure and whether additional diagnoses were present.[1] When there were discrepancies in these data, information on the IOM abstract was most frequently determined to be correct. Occasionally, the data provided by HCFA and the IOM field team were equally acceptable. This was particularly true for diagnostic data, where 4.6 percent of all sets of abstracts had a different principal diagnosis on each data source and "either" diagnosis was an acceptable choice.

The lower level of reliability for diagnosis is of particular concern because such information may be used to reflect disease prevalence, as well as patterns of hospital care and utilization of medical services, and may play an important role in determining policy directives such as resource allocation for specific disease categories. Therefore, a more detailed analysis of the problems associated with the abstracting and coding of these data was performed.

[1] A similar pattern of agreement was also found in the independent assessment of the field work (see Appendix F).

Table 1. Discrepancy Between Medicare Record and IOM Abstract and the Correct Data Source for Selected Items (weighted percent)

                                         Correct data source where a discrepancy exists
Selected                  No            Medicare    IOM
items                     discrepancy   Record      Abstract   Either   Neither   Total
Admission date                99.5         0.4         0.1        -        -      100.0%
Discharge date                99.3         0.4         0.3        -        -      100.0
Sex                           99.4         0.4         0.2        -        -      100.0
Principal diagnosis           57.2         2.3        35.7       4.6      0.2     100.0
  (four-digit)
Presence of additional        74.5         1.3        23.5       0.7       -      100.0
  diagnosis
Principal procedure           78.9         1.7        17.3       1.7      0.4     100.0

Unweighted N = 4745

The analysis was guided by several factors considered in the previous study and thought to influence reliability, including:

- the potential inadequacies of current nomenclature, coding guidelines, and medical recording practices for definitively determining and coding a principal diagnosis or principal procedure and the resultant need of abstracters to exercise some judgment which may lessen reliability;
- the degree of coding refinement (four-digit, three-digit, or broader diagnostic classifications such as AUTOGRP);
- the contribution of individual diagnoses to the overall discrepancy rates;
- the contribution of the actual coding and processing of claims information by HCFA personnel; and

- the contribution of structural and functional factors within the hospital that may affect the reliability of abstracted information, including the many paths by which data from the medical records are eventually received by HCFA.

The influences of these factors on data reliability were individually considered. The analysis of diagnostic information is presented before that pertaining to procedures. An examination of the content and coding of the Medicare claim form follows. The analysis concludes with discussions of the implications for the accuracy of utilization statistics and the relative influence of hospital characteristics on data reliability.

ANALYSIS OF DIAGNOSTIC INFORMATION

In analyzing diagnostic information, the reasons explaining discrepancies between the diagnoses coded by HCFA and the field team were first explored in hopes of eliciting general clues about potential reasons for differences. The concordance between admitting and principal diagnosis was examined next to determine whether hospitals may submit admitting diagnoses, rather than principal, to facilitate reimbursement for the Medicare claims. In both analyses all diagnoses were combined. Subsequent analyses were progressively less aggregated to examine the extent to which particular diagnostic groupings or individual diagnoses might contribute to overall accuracy at varying levels of coding refinement. Finally, the influence of co-morbidity was explored.

The analyses of information on both diagnoses and procedures are based on comparisons between the Medicare record and the IOM abstract, assuming that data on the Medicare record accurately reflect information from the claim form submitted by the hospital. This assumption was also tested, and the results are presented later in this chapter.

Reasons for Discrepancies

To understand the lower reliability of principal diagnosis, the reasons selected by the field team to explain discrepancies were analyzed.
Tables 2, 3, and 4 show the reasons for discrepancies according to the correct data source for diagnoses compared at the fourth digit, the third digit, and classified according to the AUTOGRP system. As noted in Chapter 2, the possibility of an ordering discrepancy (a discrepancy caused by uncertainty over whether a diagnosis should be considered as "principal" or "other") was to be ruled out before attributing an error to coding practices.

Table 2. Reason for Discrepancy in Principal Diagnostic Codes Compared to the Fourth Digit by Correct Data Source (weighted percent)

                               Correct data source
Reason for                Medicare     IOM
discrepancy               Record*      Abstract   Either   Neither**
Ordering-SSA definition       -           1.4        -
Ordering-hospital list        -          20.8       5.0
Ordering-completeness        4.4         21.4        -
Ordering-judgment            2.6          1.6      78.2
Ordering-other               7.8          3.8        -
Coding-clerical             29.2          3.5        -
Coding-completeness         12.7         19.4        -
Coding-procedure            37.9         13.7       1.4
Coding-judgment              2.8          0.2      14.5
Coding-other                 2.6         14.6       0.7
Total                      100.0%       100.0     100.0
(Percent of total
 number of abstracts)       (2.3)       (35.7)     (4.6)    (0.2)

*For some abstracts a reason for discrepancy was not checked by the field team when the Medicare record was correct. Reasons for discrepancies were assigned to those abstracts according to their frequency when they were assigned by the field team.

**The analysis of cases for which "neither" was correct is not presented because the numbers are too small.

When the Medicare record was correct, coding discrepancies generally occurred more frequently than ordering discrepancies. This was found at all three levels of coding refinement. When the IOM abstract was correct, the frequency of ordering and coding discrepancies was relatively equal if all four digits were compared. If only three-digit or AUTOGRP comparisons were made, coding discrepancies generally decreased and ordering discrepancies assumed greater importance. When "either" data source was correct, the discrepancies were invariably related to ordering problems.
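The three comparison levels used in these tables amount to truncating each diagnosis code before testing equality (an AUTOGRP comparison would additionally require the classification's own grouping table, which is not reproduced here). A minimal sketch, with a helper name of our own choosing:

```python
def codes_agree(code_a, code_b, digits):
    """Compare two diagnosis codes after truncating each to the
    requested number of digits (the decimal point is ignored)."""
    truncate = lambda code: code.replace(".", "")[:digits]
    return truncate(code_a) == truncate(code_b)

# Codes 560.9 and 560.1 (intestinal obstruction, unspecified cause
# vs. due to paralytic ileus) differ at the fourth digit but agree
# at the third, so a discrepancy counted in Table 2 can disappear
# in Table 3.
print(codes_agree("560.9", "560.1", digits=4))  # False
print(codes_agree("560.9", "560.1", digits=3))  # True
```

This is why coding discrepancies shrink as the comparison coarsens, while ordering discrepancies, which concern which diagnosis is listed as principal rather than how it is coded, persist.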

Table 3. Reason for Discrepancy in Principal Diagnostic Codes Compared to the Third Digit by Correct Data Source (weighted percent)

                               Correct data source
Reason for                Medicare     IOM
discrepancy               Record*      Abstract   Either   Neither**
Ordering-SSA definition       -           1.4        -
Ordering-hospital list        -          23.5       5.3
Ordering-completeness        5.0         24.0        -
Ordering-judgment            3.2          1.6      79.5
Ordering-other               9.9          4.1        -
Coding-clerical             19.6          3.8        -
Coding-completeness         10.4         12.4        -
Coding-procedure            46.3         12.6       1.7
Coding-judgment              3.4          0.2      12.7
Coding-other                 2.2         16.4       0.8
Total                      100.0%       100.0     100.0
(Percent of total
 number of abstracts)       (1.9)       (31.6)     (4.4)    (0.2)

*For some abstracts a reason for discrepancy was not checked by the field team when the Medicare record was correct. Reasons for discrepancies were assigned to those abstracts according to their frequency when they were assigned by the field team.

**The analysis of cases for which "neither" was correct is not presented because the numbers are too small.

When the IOM abstract was correct, most ordering problems were attributable to two common practices within hospitals--routinely using the first listed diagnosis on the face sheet as the principal diagnosis or determining a principal diagnosis based on an incomplete review of the medical record. The predominance of these reasons for discrepancies was independent of the level of coding refinement. Anecdotal data transmitted informally by medical record and billing department supervisors to the field team indicate a considerable amount of variation among hospitals with respect to the definition of principal diagnosis.

Table 4. Reason for Discrepancy in Principal Diagnostic Codes Compared Using AUTOGRP Classifications by Correct Data Source (weighted percent)

                               Correct data source
Reason for                Medicare     IOM
discrepancy               Record*      Abstract   Either   Neither**
Ordering-SSA definition       -           1.8        -
Ordering-hospital list        -          32.3        -
Ordering-completeness        1.1         27.1        -
Ordering-judgment             -           1.7      81.6
Ordering-other              14.3          5.6        -
Coding-clerical              9.1          2.9        -
Coding-completeness          1.4          7.0        -
Coding-procedure            70.9          8.7       1.6
Coding-judgment               -           0.1       8.1
Coding-other                 3.2         12.8       5.4
Total                      100.0%       100.0     100.0
(Percent of total
 number of abstracts)       (0.7)       (17.5)     (2.3)    (0.1)

*For some abstracts a reason for discrepancy was not checked by the field team when the Medicare record was correct. Reasons for discrepancies were assigned to those abstracts according to their frequency when they were assigned by the field team.

**The analysis of cases for which "neither" was correct is not presented because the numbers are too small.

The actual coding of a diagnosis was more of a problem when discrepancies were analyzed at the fourth digit than if only the first three digits were compared or if AUTOGRP was used. For coding discrepancies where the IOM abstract was correct, the reason usually given by the field team was "coding-completeness," suggesting that a narrative was selected to describe the principal diagnosis without completely reviewing the medical record. This occurred most frequently at the four-digit comparison level. Often a code "nine" was used as the fourth digit on the Medicare record to indicate "not otherwise specified," when a more careful review of the

record would have yielded a more specific narrative and corresponding fourth-digit code. (For example, code 560.9 indicates intestinal obstruction without mention of hernia due to an unspecified cause, while code 560.1 indicates intestinal obstruction without mention of hernia due to paralytic ileus.) Another common reason for discrepancy was "coding-procedure," which occurred with relatively equal frequency at the three levels of diagnostic coding refinement. This reflects a routine and systematic misuse or misunderstanding of the coding system, such as relying on either the alphabetic or tabular index, rather than using both. The "coding-other" reason for discrepancy also was used with relatively equal frequency regardless of the level of coding refinement. In 50.7 percent of these 207 cases, the diagnostic code listed by HCFA was 799.9, which indicates that the claim form did not contain acceptable diagnostic information, although the field team had coded a principal diagnosis. For most of the remaining cases in this category, the field team was unable to find any diagnostic information in the hospital record similar to that found on the Medicare record, so consideration of alternative discrepancy options was inappropriate.

Discrepancies for which the diagnostic codes on "either" the Medicare record or the IOM abstract were equally acceptable account for 4.6 percent of the abstracts in the study when all diagnoses are combined and compared to four digits. The most frequent reason for this decision was "ordering-judgment," indicating an honest difference of opinion in interpreting the medical record. When three-digit or AUTOGRP comparisons were used, the percent of abstracts for which "either" source of data was correct was 4.4 percent and 2.3 percent, respectively, and again the most frequent reason for discrepancy was "ordering-judgment."
This may suggest that in some instances the guidelines for determining principal diagnosis are not adequately specified. It also raises the possibility that for some patients, it may be unrealistic to expect reliable determinations of "the" principal diagnosis.

The number of cases for which "neither" data source was correct is sufficiently small that the associated reasons for discrepancies are not discussed.

In general, three basic problems account for discrepancies between the diagnostic codes determined by HCFA and the IOM field team. When the IOM abstract was correct, two problems identified by the field team reflect instances where remedial action could possibly increase the level of reliability. First, a more complete review of the medical record might reduce the frequency of both ordering and coding discrepancies which stem from the use of incomplete information. Second, more explicitly stated hospital guidelines for recording and transmitting diagnostic information and determining principal diagnosis might help. If the diagnosis listed first on the face sheet is assumed to be "principal," persons providing that information could be trained to assure that the assumption is correct. The third problem relates to abstracts where "either" diagnostic code is acceptable. In these cases, corrective action is difficult to identify, since the discrepancies stem from professional differences in interpreting a medical record. Although this

accounts for only a small percent of the abstracts, it nonetheless is important, since it identifies an area in which the determination of a single, reliable, principal diagnosis may not be feasible.

Admitting vs. Principal Diagnosis

It has been hypothesized that hospitals' need for reimbursement may cause them to forward claims to fiscal intermediaries containing an admitting diagnosis, rather than a more carefully established principal diagnosis. This likelihood was strengthened by the finding in the preceding section that many discrepancies between the Medicare record and IOM abstract stemmed from an incomplete review of the medical record by hospital personnel responsible for determining the principal diagnosis.

To explore this possibility, the field team determined an admitting diagnosis for each case, based only on information contained in the face sheet of the medical record, history and physical reports, and admitting or emergency room notes. This was compared with the principal diagnosis, based on a careful examination of the entire record. Table 5 indicates that for approximately sixty percent of the abstracts, the admitting diagnosis (determined retrospectively by the field team) is an accurate reflection of the principal diagnosis established after study to be chiefly responsible for causing the hospital admission. When the diagnoses were different, coding refinement did not appear to be influential. Rather, the admitting diagnosis usually reflected symptoms or preliminary findings; after additional testing and medical investigation a more precise and different principal diagnosis was determined.

Table 5. Discrepancies Between the Institute of Medicine Admitting and Principal Diagnoses and Reasons for Discrepancy at Varying Levels of Coding Refinement (weighted percent)

                      No           Complete-  Refine-  Investi-
                      discrepancy  ness       ment     gation    Other   Total
Four digit                58.4       0.5        4.7      33.2     3.2    100.0%
Three digit               61.7       0.5        3.2      31.9     2.7    100.0
AUTOGRP
classification            80.8       0.2        0.9      16.9     1.2    100.0

Because the more extensive medical investigation led to a considerable change in admitting diagnoses for about thirty-three percent of the cases, it appeared less likely that HCFA's principal diagnosis might in fact closely approximate an admitting diagnosis. This was confirmed when only about forty percent of HCFA's principal diagnoses agreed with the IOM's admitting diagnoses compared to four digits and about forty-six percent at three digits. When this analysis was limited to those

discharges where there was a discrepancy between the principal diagnosis on the Medicare record and the IOM abstract and the abstract was correct, only about ten percent of HCFA's principal diagnoses agreed with the IOM's admitting diagnoses compared to both three and four digits.

Influence of Diagnostic Groupings

The data presented in Table 1 show the frequency of discrepancies for all principal diagnoses combined and compared to the fourth digit. Tables 2, 3, and 4 reveal a decrease in coding errors when less specific diagnostic comparisons are used. In this section the influence of differing levels of diagnostic groupings is explored in more detail. For most of the fifteen diagnostic groups under study, three-digit or AUTOGRP analyses may be acceptable for determining basic utilization statistics, such as admission rates.

As described in Chapter 2, the AUTOGRP categories constituted the basis for drawing the sample of abstracts. Within each Diagnosis Related Group (DRG), specific diagnostic sub-groups were identified because of their importance for the Medicare population and/or their inclusion in the previous re-abstracting study. Residual diagnostic sub-groups included all diagnoses in the DRGs except the specific diagnoses. Therefore, the reliability of data was examined for the entire DRGs combined, the specific diagnoses, and the residual diagnoses, using AUTOGRP, three-digit, and four-digit comparisons.

The accuracy of data was not influenced greatly by aggregating the diagnostic groups according to their reason for inclusion in the sample--specific or residual sub-categories (see Table 6). However, the level of reliability for all categories of diagnoses does vary according to the level of coding refinement, with increased reliability using AUTOGRP or comparing only three digits. For all diagnostic categories, the AUTOGRP comparisons were more reliable. The increase in reliability must be balanced against the loss of precision in the information, however. The percent of abstracts where the data on "either" the Medicare record or IOM abstract are equally acceptable decreases only slightly when AUTOGRP is used.

Diagnostic Specific Discrepancies

Table 7 shows the frequency of discrepancy and the correct data source for the individual specific diagnoses (the specific diagnostic sub-groups within the DRGs, many of which conform to the "target" diagnoses in the previous study). The diagnoses with higher levels of reliability include cataract, inguinal hernia without obstruction, hyperplasia of the prostate, diverticulosis of intestine, and bronchitis. The categories with less accurate data include chronic ischemic heart disease, cerebrovascular diseases, diabetes mellitus, intestinal obstruction without mention of hernia, and congestive heart failure. The percent of cases where "either" data source was correct is highest for chronic ischemic heart disease, diabetes mellitus, and bronchopneumonia and unspecified pneumonia.

Table 6. Discrepancy Between the Medicare Record and the IOM Abstract at Differing Levels of Aggregating Diagnoses and the Correct Data Source Where a Discrepancy Exists (weighted percent)

Level of                        Correct data source where a discrepancy exists
aggregation       No            Medicare    IOM
of diagnoses      discrepancy   record      abstract   Either   Neither   Total
All diagnoses*
  AUTOGRP             71.7         1.2        23.3       3.5      0.3     100.0%
  Three-digit         68.2         1.5        25.9       4.1      0.3     100.0
  Four-digit          61.9         2.4        31.2       4.2      0.3     100.0
Specific sub-
categories
  AUTOGRP             71.3         1.2        23.7       3.5      0.3     100.0
  Three-digit         67.8         1.5        26.2       4.2      0.3     100.0
  Four-digit          62.5         2.0        30.8       4.4      0.3     100.0
Residual sub-
categories
  AUTOGRP             73.9         1.5        21.6       3.0       -      100.0
  Three-digit         66.3         3.6        27.1       3.0       -      100.0
  Four-digit          60.0         4.2        32.5       3.3       -      100.0

*Includes only those abstracts in the first fifteen DRGs listed in Chapter 2. The sixteenth category was created primarily to enhance the representativeness of the sample. It is not an actual DRG and had to be excluded from the AUTOGRP comparisons. It was excluded from the other comparisons as well in order to maintain a common denominator throughout the table. Therefore, the percents are different than in Table 1.

Table 7. Weighted Frequency of Discrepancy Between the Medicare Record and IOM Abstract and the Correct Data Source Where a Discrepancy Exists (weighted percent)

                                 Weighted    Percent     Correct data source where
Principal diagnosis              percent     with no     a discrepancy exists
on Medicare record               of all      discre-     Medicare  IOM
                                 abstracts   pancy       record    abstract  Either  Neither  Total
Chronic ischemic heart disease      9.8       36.8          4.0      50.3      7.6     1.3    100.0%
Cerebrovascular diseases            6.9       58.5          3.5      33.8      4.2      -     100.0
Fracture, neck of femur             2.0       70.5          3.0      26.5       -       -     100.0
Cataract                            3.0       97.3          0.2       2.5       -       -     100.0
Acute myocardial infarction         2.4       67.3          1.0      28.8      2.9      -     100.0
Inguinal hernia without
  mention of obstruction            1.3       96.7           -        2.7      0.6      -     100.0
Diabetes mellitus                   2.5       49.7          0.8      43.8      5.7      -     100.0
Hyperplasia of the prostate         2.1       87.1          0.4       8.0      4.5      -     100.0
Bronchopneumonia-organism not
  specified and pneumonia-
  organism and type not
  specified                         2.8       75.9           -       18.2      5.9      -     100.0
Cholelithiasis/cholecystitis        2.0       62.8          1.2      34.0      1.7     0.3    100.0
Intestinal obstruction without
  mention of hernia                 0.7       58.0          2.1      36.2      3.7      -     100.0
Congestive heart failure and
  left ventricular failure          1.7       58.4          0.1      36.3      5.2      -     100.0
Diverticulosis of intestine         1.5       86.5           -        9.1      4.4      -     100.0
Bronchitis                          0.9       89.8           -        8.8      1.4      -     100.0
Malignant neoplasm of
  bronchus and lung                 1.2       79.9           -       17.7      2.4      -     100.0
All else                           59.2       52.5          2.2      40.0      5.1     0.2    100.0

INFLUENCE OF DIAGNOSTIC DATA RELIABILITY ON UTILIZATION STATISTICS

The analyses of diagnostic information presented to this point are based on cases for which a specific diagnosis was listed on the Medicare record as principal and the field team either agreed or disagreed with that determination. If there was a disagreement, the diagnosis on the Medicare record may be regarded as a false positive. However, there may also be cases for which the same specific diagnosis should have been listed as principal, but was not. These cases may be regarded as false negatives. The sampling plan permits an estimate of the extent to which both types of errors occur. More importantly, their influence on approximations of admission rates and lengths of stay can be explored. Table 20 helps to explain the methods for calculating these estimates.

Table 20. Calculation of Net and Gross Difference Rates in Designation of Principal Diagnosis

                                 IOM abstracts coded as principal
Medicare record
coded as principal          Specific diagnosis     Other      Total
Specific diagnosis                  a                b         a + b
Other                               c                d         c + d
Total                             a + c            b + d         N

Percent with no discrepancy = [a / (a + c)] x 100
Gross difference rate = (b + c) / N
Net difference rate = (b - c) / N

In Table 20, the cases included in cell "a" are those for which the specific diagnosis was coded as principal on both the Medicare record and IOM abstract. The total number of differences affecting that figure for any specific diagnosis is equal to the number of cases included in that class on the original Medicare record, but not on the IOM abstract (cell "c"), plus the number included in that class on the

IOM abstract, but not on the Medicare record (cell "b"). Cell "d" includes all cases from the study population which do not have the specific diagnosis coded as principal on either data source.

The sum of the number of cases in cells "b" and "c," divided by the total number of cases in the population irrespective of diagnosis (N), may be termed the gross difference rate for the diagnosis in question. It reflects aggregate errors and usually includes differences in both directions, which may be partly off-setting. The net difference rate is the difference between "b" and "c," divided by N. It is an estimate of the non-offsetting part of the gross error. A negative net difference rate indicates that the influence of false positives is greater than false negatives.[1] Net and gross difference rates for the study diagnoses are in Appendix H.

Net and gross difference rates are useful in comparing the relative accuracy of different diagnoses and for measuring changes in the reliability of data over time. In interpreting them, however, the reader should note that a change in the frequency of occurrence of a particular diagnosis in a population is not necessarily reflected in net and gross difference rates. The number of cases for which both assessments agree (cell "a") may change without altering net and gross difference rates. The implications for reliability of similar net and gross difference rates for diagnoses with dissimilar incidence rates may be quite different. Therefore, the proportion of cases for which there is concordance between the abstract and re-abstract must be taken into account.

If the concepts of false negatives and false positives are used in calculating admission rates and lengths of stay, the operational implications of net and gross difference rates are easier to understand. Table 21 contains estimates of the distributions of specific diagnoses.
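Using the cell labels of Table 20, both rates reduce to simple arithmetic. A minimal sketch (the function name and the illustrative counts are ours, not the study's):

```python
def difference_rates(a, b, c, d):
    """Gross and net difference rates from the cells of Table 20.

    a: specific diagnosis coded as principal on both sources
    b: coded on the IOM abstract only (false negatives)
    c: coded on the Medicare record only (false positives)
    d: coded on neither source
    """
    n = a + b + c + d
    gross = (b + c) / n  # aggregate errors in both directions
    net = (b - c) / n    # non-offsetting part; negative when false
                         # positives outweigh false negatives
    return gross, net

# Partly off-setting errors: a sizable gross rate can coexist with a
# net rate near zero.
print(difference_rates(90, 6, 4, 900))  # (0.01, 0.002)
```

Note that cell "a" does not enter either rate, which is why the text cautions that concordance must be examined separately.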
Because of the absence of a population-based denominator customarily used to calculate admission rates, a proxy measure was computed based on the number of abstracts for Medicare patients with a particular diagnosis divided by the total number of Medicare admissions in the twenty percent sample. This is referred to as a "rate," although it is not one in the usual sense. The basic admission rates are based on the number of cases for which both the Medicare and IOM abstracts have the same principal diagnostic code (cell "a") divided by the total number of admissions. The Medicare admission rates are calculated by dividing the total number of Medicare records with a specific diagnosis (including false positives) by the total number of admissions. The IOM admission rates are calculated by dividing the total number of IOM abstracts with a specific diagnosis (including false negatives) by the

[1] U.S. Department of Commerce, Bureau of the Census, The Current Population Survey Reinterview Program: Some Notes and Discussion, Technical Paper No. 6 (Washington, D.C.: U.S. Government Printing Office, 1963), pp. 8-9.

total number of admissions. The rates are analyzed to three and four digits. However, the IOM rates are the same for both four and three digits because the cases for which the Medicare records and IOM abstracts disagreed at only the fourth digit are shifted from cell "b" to cell "a" in the three-digit comparisons. The numerator (a + b) remains the same and, therefore, the rate does not change.[2]

As one would expect, the basic rates usually increase as one moves from four to three digits. The Medicare admission rates are consistently higher than the basic admission rates for both three and four-digit comparisons, because they include the false positives. If the number of false positives is roughly equivalent to the number of false negatives, then the Medicare rates may be an acceptable approximation to the "actual" rates. However, the IOM admission rates, which include the false negatives, are higher than the Medicare rates with the exception of chronic ischemic heart disease, diabetes, and malignant neoplasm of bronchus and lung. The under-estimation of admissions using Medicare data is particularly noticeable for cerebrovascular disease and congestive heart failure.

This analysis can also be performed using cases from the entire DRG and comparing the diagnoses using the AUTOGRP classification system. When this approach is used (see Table 22), results are similar to those obtained for the specific diagnoses within DRGs (see Table 21). Medicare data under-estimate the number of admissions with the exception of diabetes, miscellaneous diseases of the intestine and peritoneum, malignant neoplasm of the respiratory system, and, most importantly, ischemic heart disease.

The influence of false positives and false negatives on length of stay may also be examined if the number of days is divided by the number of abstracts in the appropriate groupings of cells, as shown in Table 23.
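The three admission-rate proxies described above differ only in which Table 20 cells enter the numerator. A sketch under that definition (the function name and the counts are hypothetical):

```python
def proxy_admission_rates(a, b, c, n, per=1000):
    """Proxy admission rates per 1,000 Medicare admissions.

    basic:    both sources agree the diagnosis is principal (a/N)
    medicare: Medicare record, including false positives ((a + c)/N)
    iom:      IOM abstract, including false negatives ((a + b)/N)
    """
    return {
        "basic": a * per / n,
        "medicare": (a + c) * per / n,
        "iom": (a + b) * per / n,
    }

rates = proxy_admission_rates(a=360, b=160, c=20, n=10_000)
print(rates)  # {'basic': 36.0, 'medicare': 38.0, 'iom': 52.0}
```

With these hypothetical counts the false negatives (b) far exceed the false positives (c), so the Medicare rate under-estimates the IOM rate, the pattern the text reports for cerebrovascular disease and congestive heart failure.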
Four-digit lengths of stay for specific diagnoses are not consistently different from three-digit. Lengths of stay based on Medicare data (including false positives) are about equally likely to be higher or lower than the corresponding basic numbers for both three and four-digit comparisons. This is also true for the IOM lengths of stay (including false negatives). With the exception of fracture, neck of femur (where the IOM length of stay is about five days longer than either the basic or Medicare average), most differences are within a range of one day in either direction. When the entire DRG and AUTOGRP classification are used (see Table 24), it is equally difficult to detect consistent differences.

The use of Medicare data to calculate diagnostic-specific admission rates may result in systematic distortions. The differences between IOM and Medicare data for diagnostic-specific lengths of stay are not consistent; nevertheless, they do exist.

[2] The rates in Tables 21 through 24 were not adjusted to account for the small number of cases for which there were discrepancies and the Medicare records were correct. Such adjustments were made on an exploratory basis with the previous data set. The changes in the rates were minuscule and insufficient to justify the added complexity of the calculations.
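The length-of-stay comparison follows the same cell logic, dividing total hospital days by the number of abstracts in each grouping. A sketch with hypothetical day totals and abstract counts:

```python
def proxy_lengths_of_stay(days, counts):
    """Average stays for the three groupings of Table 20 cells.

    days/counts: per-cell totals for cells "a", "b", and "c"
    (total hospital days and number of abstracts, respectively).
    """
    return {
        "basic": days["a"] / counts["a"],
        "medicare": (days["a"] + days["c"]) / (counts["a"] + counts["c"]),
        "iom": (days["a"] + days["b"]) / (counts["a"] + counts["b"]),
    }

los = proxy_lengths_of_stay(
    days={"a": 1000, "b": 260, "c": 30},  # hypothetical day totals
    counts={"a": 100, "b": 20, "c": 5},   # hypothetical abstract counts
)
print(los)  # here long-staying false negatives pull the IOM average up
```

Unlike the admission-rate proxies, the direction of the bias here depends on whether the misclassified cases stay longer or shorter than the agreed cases, which is why the text finds no consistent pattern.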

Table 21. Influence of False Positives and Negatives on Proxy Admission Rates for Specific Diagnoses Within a Diagnosis Related Group (times 1,000) Based on All Medicare Admissions in the Twenty Percent Sample

                                 Basic admission    Medicare admission    IOM
Principal                        rate a/N           rate (a+c)/N          admissions
diagnosis                        Four-    Three-    Four-    Three-       rate
                                 digit    digit     digit    digit        (a+b)/N
Chronic ischemic
  heart disease                   36.2     38.0      96.7     98.5          52.2
Cerebrovascular diseases          40.1     47.1      50.7     57.7          71.3
Fracture, neck of femur           14.4     19.1      15.7     20.4          22.3
Cataract                          29.1     29.4      29.7     30.0          30.7
Acute myocardial infarction       16.4     18.5      22.2     24.3          27.4
Inguinal hernia without
  mention of obstruction          12.3     12.3      12.7     12.7          13.8
Diabetes mellitus                 12.6     14.2      23.7     25.4          21.1
Hyperplasia of the prostate       18.6     18.6      21.3     21.3          22.4
Bronchopneumonia-organism
  not specified and pneumonia-
  organism and type not
  specified                        9.2      9.2      11.6     11.6          11.1
Cholelithiasis/cholecystitis      12.1
Intestinal obstruction
  without mention of hernia        5.4
Congestive heart failure and
  left ventricular failure
Diverticulosis of the
  intestine
Bronchitis
Malignant neoplasm of
  bronchus and lung

Table 22. Influence of False Positives and Negatives on Proxy Admission Rates for All Diagnoses within a Diagnosis Related Group (times 1,000), Based on All Medicare Admissions in the Twenty Percent Sample

                                          Basic        Medicare      IOM
Diagnosis related group                   admission    admission     admissions
(AUTOGRP)                                 rate a/N     rate (a+c)/N  (a+b)/N

Ischemic heart disease except AMI         49.9         107.7         66.9
Cerebrovascular diseases                  58.4          68.9         71.3
Fractures                                 42.7          47.6         49.6
Diseases of the eye                       35.9          37.2         38.0
Acute myocardial infarction               18.5          24.3         27.4
Hernia of abdominal cavity                26.0          28.1         30.9
Diabetes mellitus                         14.2          25.4         21.1
Diseases of the prostate                  20.8          23.7         25.0
Pneumonia                                 26.3          30.5         36.9
Diseases of the gall bladder and
  bile duct                               17.9          21.4         23.7
Miscellaneous diseases of the
  intestine and peritoneum                11.6          19.5         17.7
Heart failure                             10.6          17.5         35.3
Enteritis, diverticula and functional
  disorders of intestine                  16.0          18.5         23.9
Bronchitis                                11.4          14.3         14.7
Malignant neoplasm of respiratory
  system                                  10.3          13.6         12.0

Table 23. Influence of False Positives and Negatives on Average Lengths of Stay for Specific Diagnoses within a Diagnosis Related Group, Based on All Medicare Admissions in the Twenty Percent Sample

                                          Basic length       Medicare length     IOM
                                          of stay            of stay             length
Principal diagnosis                       Four-    Three-    Four-    Three-     of stay
                                          digit    digit     digit    digit

Chronic ischemic heart disease            10.0     10.0      10.7     10.7       [illegible]
Cerebrovascular diseases                  12.8     12.8      12.6     12.6       12.2
Fracture, neck of femur                   20.8     20.8      20.1     20.4       25.7
Cataract                                   5.0      5.0       5.1      5.1        5.4
Acute myocardial infarction               14.4     14.2      13.7     13.6       13.6
Inguinal hernia without mention
  of obstruction                           7.1      7.1       7.2      7.2        7.3
Diabetes mellitus                         10.9     10.6      12.2     11.9       10.7
Hyperplasia of the prostate               12.2     12.2      12.2     12.2       12.5
Bronchopneumonia, organism not
  specified, and pneumonia, organism
  and type not specified                  10.9     10.9      11.3     11.3       10.7
Cholelithiasis/cholecystitis              12.8     13.2      12.1     12.5       13.4
Intestinal obstruction without
  mention of hernia                       12.2     12.7      13.8     14.0       11.3
Congestive heart failure and left
  ventricular failure                      9.4      9.2      10.0      9.8       11.3
Diverticulosis of intestine                8.3      8.3       9.0      9.0       10.6
Bronchitis                                 7.9      7.9       8.3      8.3        8.2
Malignant neoplasm of bronchus
  and lung                                [values illegible in machine-read text]

Table 24. Influence of False Positives and Negatives on Average Lengths of Stay for All Diagnoses within a Diagnosis Related Group, Based on All Medicare Admissions in the Twenty Percent Sample

                                          Basic       Medicare    IOM
                                          length      length      length
Diagnosis related group                   of stay     of stay     of stay

Ischemic heart disease except AMI          9.7        10.7        10.1
Cerebrovascular diseases                  12.2        12.2        12.2
Fractures                                 19.1        18.5        18.9
Diseases of the eye                        5.1         5.3         5.2
Acute myocardial infarction               14.2        13.6        13.6
Hernia of abdominal cavity                 9.9         9.9         9.6
Diabetes mellitus                         10.6        11.9        10.7
Diseases of the prostate                  12.0        11.8        12.2
Pneumonia                                 10.8        11.2        10.6
Diseases of the gall bladder and
  bile duct                               13.0        12.7        14.4
Miscellaneous diseases of the
  intestine and peritoneum                11.2        12.3        11.0
Heart failure                             10.1        10.4        11.7
Enteritis, diverticula and functional
  disorders of intestine                   8.1         8.7        10.2
Bronchitis                                 7.9         8.2         8.2
Malignant neoplasm of respiratory
  system                                  13.0        11.8        13.0

Influence of Hospital Characteristics

To gain further insight into the influence of hospital characteristics on the reliability of data, selected aspects of the process by which claims information is obtained within the hospital and forwarded to the fiscal intermediary were examined. Each hospital or abstracting-process characteristic was cross-tabulated by the percent of abstracts for which there were no discrepancies between the Medicare record and the IOM abstract. The effect on diagnoses was measured at the four-digit, three-digit, and AUTOGRP levels of comparison; the influence on procedures was also examined. A chi-square test of significance was calculated to determine the independence of the two variables.[3]

As shown in Table 25, the influence of most variables was statistically significant. Interpretation is difficult, however, because the resulting relationships were not always consistent for all dependent variables. Occasionally the relationships were statistically significant but not meaningful, presumably because of inter-correlations with other variables which more directly affect the quality of data. The more important relationships are summarized below. Unless otherwise noted, the effect at the AUTOGRP level was the same as for the three-digit comparison.

Table 25. Relationships Between Hospital and Abstracting Process Characteristics and the Accuracy of Information on Diagnosis and Procedure

Characteristics | Four-digit diagnosis | Three-digit diagnosis | Procedures
Personnel and Training

Training of billing personnel where they review portions of medical records for diagnosis | Billing office training with no medical record experience = better data | Same as four-digit | Not appropriate for procedure

Training of personnel abstracting information where billing uses abstracted data | Data from physicians and RRAs are better than from ARTs or others | Same as four-digit | Same as four-digit

3 Because of the instability of the weighted numbers, the chi-square was based on a re-distribution of the unweighted numbers according to the weighted percentages. A statistically significant relationship was assumed if the chance of its occurrence was less than .05.
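The redistribution procedure described in footnote 3 can be sketched as follows. This is our reconstruction under stated assumptions, not the study's program: the unweighted sample size is spread across the contingency-table cells in proportion to the weighted percentages, and an ordinary chi-square test of independence is then applied at the .05 level. The cell proportions and sample size below are invented.

```python
# Sketch of footnote 3's significance test (our reconstruction, not the
# study's code): redistribute the unweighted N by the weighted percentages,
# then apply a standard chi-square test of independence.

def chi_square_stat(table):
    """Chi-square statistic for a 2-D contingency table (list of row lists)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

def redistributed_table(n_unweighted, weighted_pcts):
    """Spread the unweighted N over the cells by the weighted proportions."""
    return [[n_unweighted * p for p in row] for row in weighted_pcts]

# Hypothetical weighted cell proportions for a 2x2 cross-tabulation
# (characteristic present/absent x abstract discrepant/agreeing):
pcts = [[0.30, 0.20],
        [0.20, 0.30]]
table = redistributed_table(400, pcts)
CRITICAL_05_DF1 = 3.841  # chi-square critical value, df = 1, alpha = .05
# For this invented table the statistic is 16.0 > 3.841, i.e. significant.
print(chi_square_stat(table) > CRITICAL_05_DF1)
```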

Table 25 continued

Characteristics | Four-digit diagnosis | Three-digit diagnosis | Procedures

Abstracting Process

Source of abstracted data used by billing | Typed discharge list or copy of face sheet = more accurate; computerized discharge list = least accurate | Same as four-digit, except admit sheet or entire record = least accurate | Copy of face sheet or entire record = more accurate data; typed discharge list = least accurate

Description of diagnostic data received by billing | Diagnostic codes more accurate than narrative description | Same as four-digit diagnosis | Not appropriate for procedure

Time lapse between patient discharge and transfer of diagnostic information to billing | Significant but not meaningful | Significant but not meaningful | Not appropriate for procedure

Time lapse between patient discharge and determination of a final diagnosis | Significant but not meaningful | Significant but not meaningful | Not appropriate for procedure

Submission of updated diagnostic information to billing office | Submission of updated information = more accurate data | Not significant | Not appropriate for procedure

Submission of updated diagnostic information to the fiscal intermediary | Submission of updated information = more accurate data | Same as four-digit | Not appropriate for procedure

Definitions used in determining principal diagnosis or procedure | Use of Medicare definition = more accurate; first listed = less accurate | Same as four-digit | Not significant

Table 25 continued

Characteristics | Four-digit diagnosis | Three-digit diagnosis | Procedures

Hospital Characteristics

Geographic region | Northeast region = less accurate data | Same as four-digit | Same as four-digit

Population density | Not significant | Non-SMSA = more accurate | Non-SMSA = more accurate

Control | Not significant | Not significant | Proprietary = more accurate; voluntary = less accurate

Bed size | Smaller hospitals = better data | Same as four-digit | Same as four-digit

The checklist included an item intended to elicit information about the training of the person reviewing the medical record to retrieve claims data, regardless of whether the function was performed in the billing office or elsewhere. In hospitals where portions of the medical record are transmitted to the billing office, persons trained in billing office procedures but without medical records experience were associated with better data than were persons without that training. Presumably, the training would include methods for retrieving diagnostic and procedural information from the medical record.

Where a discharge list or some other summary of abstracted information is used by the billing office to complete the claim form, the data were better if the abstracted information was provided by either a physician or an RRA. The reliability of data across categories was less consistently influenced by the source of abstracted information used by billing. This may suggest that the care with which the information is recorded or abstracted, and the training of the persons involved in those functions, is more important than the actual document (typed discharge list, computerized discharge list, face sheet, etc.). In any case, when the billing office was provided with diagnostic codes rather than narrative information, the claims data tended to be more accurate.
Similarly, the data were more accurate in hospitals where updated diagnostic information is regularly submitted to the billing department, as well as to the fiscal intermediary.

The various definitions for principal diagnosis and principal procedure used by the study hospitals were expected to influence the reliability of data. The expectation was confirmed, but the findings are perplexing. The Medicare definition for principal diagnosis was associated with more accurate data, despite the fact that the field team used the UHDDS definition as the basis for comparison. It is possible, however, that hospitals which profess to use the Medicare definition do not consistently apply it. Data were least accurate when the first-listed diagnosis on the face sheet was routinely used for designating a principal diagnosis. Definitions for principal procedure were not significantly associated with the accuracy of data.

The accuracy of both diagnostic and procedure data varied by geographic region. Invariably, hospitals in the Northeast region provided less accurate data than hospitals in the South, West, or North Central regions. Hospitals outside a Standard Metropolitan Statistical Area (SMSA) provided more accurate data than those located within an SMSA, although the differences were statistically significant only for diagnoses at three digits and for principal procedure. Arrangements for hospital control did not influence the accuracy of diagnostic data, although proprietary hospitals had more accurate data for principal procedure. Hospitals with fewer beds were found to have more accurate data for both diagnoses and procedures.

In an attempt to determine the relative influence of hospital characteristics on the reliability of data, simple and multiple regressions were performed using the characteristics as independent variables. Census region was the only independent variable which was consistently associated with the accuracy of diagnostic and procedure data. For all regressions, the amount of variance explained was low, reaching a maximum of 0.125.

The analysis of hospital and billing process characteristics may be useful in instituting program changes to increase the accuracy of diagnostic and procedure data. The reader should note, however, that this information was obtained informally during visits to the study hospitals and that the degree of subjectivity in the responses could not be ascertained.
In addition, several of the process characteristics may be correlated within a particular hospital, even though there was very little correlation among these characteristics for all hospitals combined. Despite these limitations, it appears that billing office personnel with training in billing procedures, but no medical record experience, may provide accurate diagnostic information if accurate information is provided by the medical record department. If RRAs abstract and code the information and submit it to the billing office, the data forwarded to the fiscal intermediaries tend to be more accurate. The role of physicians in recording patient information is an important variable. Moreover, the management practice of having medical record departments submit updated diagnostic information to the billing office and the fiscal intermediaries helps to increase the accuracy of the data. Of the structural characteristics, only the geographic region of the country in which hospitals are located and hospital size were significantly and consistently linked with the accuracy of data.
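The simple-regression check reported above (no regression explained more than 12.5 percent of the variance) can be illustrated with a small sketch. The hospital data below are invented for illustration; only the 0.125 ceiling comes from the study.

```python
# Illustrative sketch (invented data) of regressing hospital-level percent
# agreement on a single hospital characteristic and examining the explained
# variance R^2, as in the study's simple regressions.

def r_squared(x, y):
    """R^2 of the ordinary least-squares line of y on x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical data: bed size vs. percent of abstracts with no discrepancy.
beds = [80, 120, 200, 350, 500, 650]
pct_agree = [72, 76, 70, 75, 71, 74]
# A weak relationship: the explained variance is far below the study's
# reported maximum of 0.125.
print(r_squared(beds, pct_agree))
```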