Chapter 3

ANALYSIS

The analysis was intended to assess the reliability of six abstracted information items chosen for study and to investigate several factors which might affect data reliability, particularly for information on principal diagnosis and procedure. The effect of data reliability on hospital utilization statistics, such as diagnosis-specific admission rates and lengths of stay, was also examined.

TOTAL FREQUENCIES OF DISCREPANCIES

Table 1 shows the frequency of discrepancies between the Medicare record and the Institute of Medicine (IOM) abstract for each data item selected for study. In general, the data were highly reliable for dates of hospital admission and discharge and the sex of a patient. Information was less reliable for data reflecting the principal diagnosis and principal procedure and whether additional diagnoses were present. [1] When there were discrepancies in these data, information on the IOM abstract was most frequently determined to be correct. Occasionally, the data provided by HCFA and the IOM field team were equally acceptable. This was particularly true for diagnostic data, where 4.6 percent of all sets of abstracts had a different principal diagnosis on each data source and "either" diagnosis was an acceptable choice.

The lower level of reliability for diagnosis is of particular concern because such information may be used to reflect disease prevalence, as well as patterns of hospital care and utilization of medical services, and may play an important role in determining policy directives such as resource allocation for specific disease categories. Therefore, a more detailed analysis of the problems associated with the abstracting and coding of these data was performed.

[1] A similar pattern of agreement was also found in the independent assessment of the field work (see Appendix F).
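The kind of tabulation reported in Table 1 can be illustrated with a minimal sketch. The record layout, field names, sampling weights, and example values below are illustrative assumptions, not the study's actual files or program; the sketch only shows how weighted percents of agreement, and of each "correct source" category where a discrepancy exists, might be computed for one data item.

    # Hedged sketch: weighted tabulation of discrepancies between two data
    # sources for a single abstracted item.  All field names and values are
    # hypothetical.

    def tabulate_item(abstracts, item):
        """Return weighted percents: no discrepancy, and the correct data
        source (medicare / iom / either / neither) where a discrepancy exists."""
        total = sum(a["weight"] for a in abstracts)
        cells = {"no discrepancy": 0.0, "medicare": 0.0,
                 "iom": 0.0, "either": 0.0, "neither": 0.0}
        for a in abstracts:
            if a["medicare"][item] == a["iom"][item]:
                cells["no discrepancy"] += a["weight"]
            else:
                # the field team's reconciliation decision for this item
                cells[a["correct_source"][item]] += a["weight"]
        return {k: round(100.0 * v / total, 1) for k, v in cells.items()}

    # Two hypothetical abstracts of equal weight:
    example = [
        {"weight": 1.0,
         "medicare": {"principal_dx": "560.9"},
         "iom":      {"principal_dx": "560.1"},
         "correct_source": {"principal_dx": "iom"}},
        {"weight": 1.0,
         "medicare": {"principal_dx": "250.0"},
         "iom":      {"principal_dx": "250.0"},
         "correct_source": {}},
    ]
    print(tabulate_item(example, "principal_dx"))
    # {'no discrepancy': 50.0, 'medicare': 0.0, 'iom': 50.0, 'either': 0.0, 'neither': 0.0}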

Table 1. Discrepancy Between Medicare Record and IOM Abstract and the Correct Data Source for Selected Items (weighted percent)

                                                     Correct data source where a discrepancy exists
Selected item                      No discrepancy   Medicare record   IOM abstract   Either   Neither   Total
Admission date                          99.5              0.4              0.1          -        -      100.0%
Discharge date                          99.3              0.4              0.3          -        -      100.0
Sex                                     99.4              0.4              0.2          -        -      100.0
Principal diagnosis (four-digit)        57.2              2.3             35.7         4.6      0.2     100.0
Presence of additional diagnosis        74.5              1.3             23.5         0.7       -      100.0
Principal procedure                     78.9              1.7             17.3         1.7      0.4     100.0

Unweighted N = 4745

The analysis was guided by several factors considered in the previous study and thought to influence reliability, including:

· the potential inadequacies of current nomenclature, coding guidelines, and medical recording practices for definitively determining and coding a principal diagnosis or principal procedure, and the resultant need of abstracters to exercise some judgment, which may lessen reliability;

· the degree of coding refinement (four-digit, three-digit, or broader diagnostic classifications such as AUTOGRP);

· the contribution of individual diagnoses to the overall discrepancy rates;

· the contribution of the actual coding and processing of claims information by HCFA personnel; and

· the contribution of structural and functional factors within the hospital that may affect the reliability of abstracted information, including the many paths by which data from the medical records are eventually received by HCFA.

The influences of these factors on data reliability were individually considered. The analysis of diagnostic information is presented before that pertaining to procedures. An examination of the content and coding of the Medicare claim form follows. The analysis concludes with discussions of the implications for the accuracy of utilization statistics and the relative influence of hospital characteristics on data reliability.

ANALYSIS OF DIAGNOSTIC INFORMATION

In analyzing diagnostic information, the reasons explaining discrepancies between the diagnoses coded by HCFA and the field team were first explored in hopes of eliciting general clues about potential reasons for differences. The concordance between admitting and principal diagnosis was examined next to determine whether hospitals may submit admitting diagnoses, rather than principal, to facilitate reimbursement for the Medicare claims. In both analyses all diagnoses were combined. Subsequent analyses were progressively less aggregated to examine the extent to which particular diagnostic groupings or individual diagnoses might contribute to overall accuracy at varying levels of coding refinement. Finally, the influence of co-morbidity was explored.

The analyses of information on both diagnoses and procedures are based on comparisons between the Medicare record and the IOM abstract, assuming that data on the Medicare record accurately reflect information from the claim form submitted by the hospital. This assumption was also tested, and the results are presented later in this chapter.

Reasons for Discrepancies

To understand the lower reliability of principal diagnosis, the reasons selected by the field team to explain discrepancies were analyzed. Tables 2, 3, and 4 show the reasons for discrepancies according to the correct data source for diagnoses compared at the fourth digit, at the third digit, and classified according to the AUTOGRP system. As noted in Chapter 2, the possibility of an ordering discrepancy (a discrepancy caused by uncertainty over whether a diagnosis should be considered as "principal" or "other") was to be ruled out before attributing an error to coding practices.

Table 2. Reason for Discrepancy in Principal Diagnostic Codes Compared to the Fourth Digit by Correct Data Source (weighted percent)

Correct data source columns: Medicare record*, IOM abstract, Either, Neither**.

Ordering-SSA definition
Ordering-hospital list   -   20.8   5.0
Ordering-completeness   4.4   21.4
Ordering-judgment   2.6
Ordering-other   7.8
Coding-clerical   29.2   3.5
Coding-completeness   12.7
Coding-procedure   37.9   1.4   78.2   3.8   1.6   19.4   13.7   -
Coding-judgment   2.8   0.2   14.5
Coding-other   2.6
Total   100.0%   14.6   0.7   100.0   100.0
(Percent of total number of abstracts)   (2.3)   (35.7)   (4.6)   (0.2)

*For some abstracts a reason for discrepancy was not checked by the field team when the Medicare record was correct. Reasons for discrepancies were assigned to those abstracts according to their frequency when they were assigned by the field team.

**The analysis of cases for which "neither" was correct is not presented because the numbers are too small.

When the Medicare record was correct, coding discrepancies generally occurred more frequently than ordering discrepancies. This was found at all three levels of coding refinement. When the IOM abstract was correct, the frequency of ordering and coding discrepancies was relatively equal if all four digits were compared. If only three-digit or AUTOGRP comparisons were made, coding discrepancies generally decreased and ordering discrepancies assumed greater importance. When "either" data source was correct, the discrepancies were invariably related to ordering problems.

Table 3. Reason for Discrepancy in Principal Diagnostic Codes Compared to the Third Digit by Correct Data Source (weighted percent)

                                      Correct data source
Reason for discrepancy       Medicare record*   IOM abstract   Either   Neither**
Ordering-SSA definition              -               1.4
Ordering-hospital list               -              23.5         5.3
Ordering-completeness               5.0             24.0
Ordering-judgment                   3.2              1.6        79.5
Ordering-other                      9.9              4.1
Coding-clerical                    19.6              3.8
Coding-completeness                10.4             12.4
Coding-procedure                   46.3             12.6         1.7
Coding-judgment                     3.4              0.2        12.7
Coding-other                        2.2             16.4         0.8
Total                             100.0%           100.0       100.0
(Percent of total number
of abstracts)                      (1.9)           (31.6)       (4.4)    (0.2)

*For some abstracts a reason for discrepancy was not checked by the field team when the Medicare record was correct. Reasons for discrepancies were assigned to those abstracts according to their frequency when they were assigned by the field team.

**The analysis of cases for which "neither" was correct is not presented because the numbers are too small.

When the IOM abstract was correct, most ordering problems were attributable to two common practices within hospitals: routinely using the first listed diagnosis on the face sheet as the principal diagnosis, or determining a principal diagnosis based on an incomplete review of the medical record. The predominance of these reasons for discrepancies was independent of the level of coding refinement. Anecdotal data transmitted informally by medical record and billing department supervisors to the field team indicate a considerable amount of variation among hospitals with respect to the definition of principal diagnosis.

Table 4. Reason for Discrepancy in Principal Diagnostic Codes Compared Using AUTOGRP Classifications by Correct Data Source (weighted percent)

Correct data source columns: Medicare record*, IOM abstract, Either, Neither**.

Ordering-SSA definition   1.8
Ordering-hospital list   -   32.3
Ordering-completeness   1.1   27.1
Ordering-judgment   -   1.7   81.6
Ordering-other   14.3   5.6
Coding-clerical   9.1   2.9
Coding-completeness   1.4   7.0
Coding-procedure   70.9   8.7
Coding-judgment   -   0.1   8.1
Coding-other   3.2   5.4   12.8   1.6
Total   100.0%   100.0   100.0
(Percent of total number of abstracts)   (0.7)   (17.5)   (2.3)   (0.1)

*For some abstracts a reason for discrepancy was not checked by the field team when the Medicare record was correct. Reasons for discrepancies were assigned to those abstracts according to their frequency when they were assigned by the field team.

**The analysis of cases for which "neither" was correct is not presented because the numbers are too small.

The actual coding of a diagnosis was more of a problem when discrepancies were analyzed at the fourth digit than if only the first three digits were compared or if AUTOGRP was used. For coding discrepancies where the IOM abstract was correct, the reason usually given by the field team was "coding-completeness," suggesting that a narrative was selected to describe the principal diagnosis without completely reviewing the medical record. This occurred most frequently at the four-digit comparison level. Often a code "nine" was used as the fourth digit on the Medicare record to indicate "not otherwise specified," when a more careful review of the record would have yielded a more specific narrative and corresponding fourth-digit code.

(For example, code 560.9 indicates intestinal obstruction without mention of hernia due to an unspecified cause, while code 560.1 indicates intestinal obstruction without mention of hernia due to paralytic ileus.) Another common reason for discrepancy was "coding-procedure," which occurred with relatively equal frequency at the three levels of diagnostic coding refinement. This reflects a routine and systematic misuse or misunderstanding of the coding system, such as relying on either the alphabetic or the tabular index, rather than using both. The "coding-other" reason for discrepancy also was used with relatively equal frequency regardless of the level of coding refinement. In 50.7 percent of these 207 cases, the diagnostic code listed by HCFA was 799.9, which indicates that the claim form did not contain acceptable diagnostic information, although the field team had coded a principal diagnosis. For most of the remaining cases in this category, the field team was unable to find any diagnostic information in the hospital record similar to that found on the Medicare record, so consideration of alternative discrepancy options was inappropriate.

Discrepancies for which the diagnostic codes on "either" the Medicare record or the IOM abstract were equally acceptable account for 4.6 percent of the abstracts in the study when all diagnoses are combined and compared to four digits. The most frequent reason for this decision was "ordering-judgment," indicating an honest difference of opinion in interpreting the medical record. When three-digit or AUTOGRP comparisons were used, the percent of abstracts for which "either" source of data was correct was 4.4 percent and 2.3 percent, respectively, and again the most frequent reason for discrepancy was "ordering-judgment." This may suggest that in some instances the guidelines for determining principal diagnosis are not adequately specified. It also raises the possibility that for some patients it may be unrealistic to expect reliable determinations of "the" principal diagnosis.

The number of cases for which "neither" data source was correct is sufficiently small that the associated reasons for discrepancies are not discussed.

In general, three basic problems account for discrepancies between the diagnostic codes determined by HCFA and the IOM field team. When the IOM abstract was correct, two problems identified by the field team reflect instances where remedial action could possibly increase the level of reliability. First, a more complete review of the medical record might reduce the frequency of both ordering and coding discrepancies which stem from the use of incomplete information. Second, more explicitly stated hospital guidelines for recording and transmitting diagnostic information and determining principal diagnosis might help. If the diagnosis listed first on the face sheet is assumed to be "principal," persons providing that information could be trained to assure that the assumption is correct. The third problem relates to abstracts where "either" diagnostic code is acceptable. In these cases, corrective action is difficult to identify, since the discrepancies stem from professional differences in interpreting a medical record. Although this accounts for only a small percent of the abstracts, it nonetheless is important, since it identifies an area in which the determination of a single, reliable, principal diagnosis may not be feasible.

Admitting vs. Principal Diagnosis

It has been hypothesized that hospitals' need for reimbursement may cause them to forward claims to fiscal intermediaries containing an admitting diagnosis, rather than a more carefully established principal diagnosis. This likelihood was strengthened by the finding in the preceding section that many discrepancies between the Medicare record and IOM abstract stemmed from an incomplete review of the medical record by hospital personnel responsible for determining the principal diagnosis. To explore this possibility, the field team determined an admitting diagnosis for each case, based only on information contained in the face sheet of the medical record, history and physical reports, and admitting or emergency room notes. This was compared with the principal diagnosis, based on a careful examination of the entire record. Table 5 indicates that for approximately sixty percent of the abstracts, the admitting diagnosis (determined retrospectively by the field team) is an accurate reflection of the principal diagnosis established after study to be chiefly responsible for causing the hospital admission. When the diagnoses were different, coding refinement did not appear to be influential. Rather, the admitting diagnosis usually reflected symptoms or preliminary findings; after additional testing and medical investigation a more precise and different principal diagnosis was determined.

Table 5. Discrepancies Between the Institute of Medicine Admitting and Principal Diagnoses and Reasons for Discrepancy at Varying Levels of Coding Refinement (weighted percent)

                          No discrepancy   Completeness   Refinement   Investigation   Other   Total
Four-digit                     58.4             0.5           4.7           33.2         3.2    100.0%
Three-digit                    61.7             0.5           3.2           31.9         2.7    100.0
AUTOGRP classification         80.8             0.2           0.9           16.9         1.2    100.0

Because the more extensive medical investigation led to a considerable change in admitting diagnoses for about thirty-three percent of the cases, it appeared less likely that HCFA's principal diagnosis might in fact closely approximate an admitting diagnosis. This was confirmed when only about forty percent of HCFA's principal diagnoses agreed with the IOM's admitting diagnoses compared to four digits and about forty-six percent at three digits. When this analysis was limited to those discharges where there was a discrepancy between the principal diagnosis on the Medicare record and the IOM abstract and the abstract was correct, only about ten percent of HCFA's principal diagnoses agreed with the IOM's admitting diagnoses compared to both three and four digits.

Influence of Diagnostic Groupings

The data presented in Table 1 show the frequency of discrepancies for all principal diagnoses combined and compared to the fourth digit. Tables 2, 3, and 4 reveal a decrease in coding errors when less specific diagnostic comparisons are used. In this section the influence of differing levels of diagnostic groupings is explored in more detail.

For most of the fifteen diagnostic groups under study, three-digit or AUTOGRP analyses may be acceptable for determining basic utilization statistics, such as admission rates. As described in Chapter 2, the AUTOGRP categories constituted the basis for drawing the sample of abstracts. Within each Diagnosis Related Group (DRG), specific diagnostic sub-groups were identified because of their importance for the Medicare population and/or their inclusion in the previous re-abstracting study. Residual diagnostic sub-groups included all diagnoses in the DRGs except the specific diagnoses. Therefore, the reliability of data was examined for the entire DRGs combined, the specific diagnoses, and the residual diagnoses, using AUTOGRP, three-digit, and four-digit comparisons.

The accuracy of data was not influenced greatly by aggregating the diagnostic groups according to their reason for inclusion in the sample--specific or residual sub-categories (see Table 6). However, the level of reliability for all categories of diagnoses does vary according to the level of coding refinement, with increased reliability using AUTOGRP or comparing only three digits. For all diagnostic categories, the AUTOGRP comparisons were more reliable. The increase in reliability must be balanced against the loss of precision in the information, however. The percent of abstracts where the data on "either" the Medicare record or IOM abstract are equally acceptable decreases only slightly when AUTOGRP is used.

Diagnostic Specific Discrepancies

Table 7 shows the frequency of discrepancy and the correct data source for the individual specific diagnoses (the specific diagnostic sub-groups within the DRGs, many of which conform to the "target" diagnoses in the previous study). The diagnoses with higher levels of reliability include cataract, inguinal hernia without obstruction, hyperplasia of the prostate, diverticulosis of intestine, and bronchitis. The categories with less accurate data include chronic ischemic heart disease, cerebrovascular diseases, diabetes mellitus, intestinal obstruction without mention of hernia, and congestive heart failure. The percent of cases where "either" data source was correct is highest for chronic ischemic heart disease, diabetes mellitus, and bronchopneumonia and unspecified pneumonia.
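The three levels of comparison used in Table 6 and throughout this chapter (four-digit, three-digit, and AUTOGRP) can be illustrated with a small sketch. The group lookup below is a hypothetical stand-in for the AUTOGRP/DRG classification, not the actual one, and the example codes are only the two mentioned earlier in the text.

    # Hedged sketch: comparing a Medicare code and an IOM code at the three
    # levels of refinement used in this chapter.  EXAMPLE_GROUPS is a made-up
    # placeholder for the AUTOGRP classification.

    EXAMPLE_GROUPS = {
        "560": "miscellaneous diseases of intestine and peritoneum",
        "562": "enteritis, diverticula and functional disorders of intestine",
    }

    def codes_agree(medicare_code, iom_code, level):
        """True if two four-digit diagnosis codes match at the requested level."""
        if level == "four-digit":
            return medicare_code == iom_code
        if level == "three-digit":
            return medicare_code[:3] == iom_code[:3]
        if level == "autogrp":
            g1 = EXAMPLE_GROUPS.get(medicare_code[:3])
            g2 = EXAMPLE_GROUPS.get(iom_code[:3])
            return g1 is not None and g1 == g2
        raise ValueError("unknown level: " + level)

    # 560.9 (cause unspecified) versus 560.1 (paralytic ileus), written
    # without the decimal point:
    print(codes_agree("5609", "5601", "four-digit"))   # False
    print(codes_agree("5609", "5601", "three-digit"))  # True
    print(codes_agree("5609", "5601", "autogrp"))      # True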

Table 6. Discrepancy Between the Medicare Record and the IOM Abstract at Differing Levels of Aggregating Diagnoses and the Correct Data Source Where a Discrepancy Exists (weighted percent)

Level of aggregation          No discrepancy   Medicare record   IOM abstract   Either   Neither   Total
All diagnoses*
  AUTOGRP                          71.7              1.2             23.3         3.5      0.3     100.0%
  Three-digit                      68.2              1.5             25.9         4.1      0.3     100.0
  Four-digit                       61.9              2.4             31.2         4.2      0.3     100.0
Specific sub-categories
  AUTOGRP                          71.3              1.2             23.7         3.5      0.3     100.0
  Three-digit                      67.8              1.5             26.2         4.2      0.3     100.0
  Four-digit                       62.5              2.0             30.8         4.4      0.3     100.0
Residual sub-categories
  AUTOGRP                          73.9              1.5             21.6         3.0       -      100.0
  Three-digit                      66.3              3.6             27.1         3.0       -      100.0
  Four-digit                       60.0              4.2             32.5         3.3       -      100.0

*Includes only those abstracts in the first fifteen DRGs listed in Chapter 2. The sixteenth category was created primarily to enhance the representativeness of the sample. It is not an actual DRG and had to be excluded from the AUTOGRP comparisons. It was excluded from the other comparisons as well in order to maintain a common denominator throughout the table. Therefore, the percents are different than in Table 1.

Table 7. Weighted Frequency of Discrepancy Between the Medicare Record and IOM Abstract and the Correct Data Source Where a Discrepancy Exists (weighted percent)

                                                     Weighted percent                        Correct data source where a discrepancy exists
Principal diagnosis on Medicare record               of all abstracts   No discrepancy   Medicare record   IOM abstract   Either   Neither   Total
Chronic ischemic heart disease                             9.8               36.8              4.0             50.3         7.6      1.3     100.0%
Cerebrovascular diseases                                   6.9               58.5              3.5             33.8         4.2       -      100.0
Fracture, neck of femur                                    2.0               70.5              3.0             26.5          -        -      100.0
Cataract                                                   3.0               97.3              0.2              2.5          -        -      100.0
Acute myocardial infarction                                2.4               67.3              1.0             28.8         2.9       -      100.0
Inguinal hernia without mention of obstruction             1.3               96.7               -               2.7         0.6       -      100.0
Diabetes mellitus                                          2.5               49.7              0.8             43.8         5.7       -      100.0
Hyperplasia of the prostate                                2.1               87.1              0.4              8.0         4.5       -      100.0
Bronchopneumonia--organism not specified and
  pneumonia--organism and type not specified               2.8               75.9               -              18.2         5.9       -      100.0
Cholelithiasis/cholecystitis                               2.0               62.8              1.2             34.0         1.7      0.3     100.0
Intestinal obstruction without mention of hernia           0.7               58.0              2.1             36.2         3.7       -      100.0
Congestive heart failure and left ventricular failure      1.7               58.4              0.1             36.3         5.2       -      100.0
Diverticulosis of intestine                                1.5               86.5               -               9.1         4.4       -      100.0
Bronchitis                                                 0.9               89.8               -               8.8         1.4       -      100.0
Malignant neoplasm of bronchus and lung                    1.2               79.9               -              17.7         2.4       -      100.0
All else                                                  59.2               52.5              2.2             40.0         5.1      0.2     100.0

Table 8 displays the percent of abstracts with no discrepancy by diagnosis at differing levels of coding refinement. The table is arranged so that the first column of figures shows the percent of abstracts with no discrepancy for the entire DRG, using AUTOGRP codes. The following columns show the percent with no discrepancy for the specific diagnostic sub-group within each DRG, compared with AUTOGRP codes, three digits, and four digits. All data in this table are confined to either entire DRGs or specific diagnoses within them. Related data for the residual diagnoses are in Appendix G.

For almost all specific diagnostic sub-groups, three-digit comparisons are more reliable than four-digit. The changes are most pronounced for diagnoses that show a greater number of four-digit codes (see Figure 1 in Chapter 2), especially fracture, neck of femur; intestinal obstruction without mention of hernia; and cholelithiasis/cholecystitis. For some categories, the percents increase as one moves from three-digit to AUTOGRP--particularly for cerebrovascular diseases. For most of the specific diagnoses, however, the AUTOGRP comparison is the same as the three-digit match.

Table 8. Abstracts With No Discrepancy Between the Medicare Record and IOM Abstract by Sub-categories of Diagnoses at Differing Levels of Coding Refinement (weighted percent)

Principal diagnosis:                 AUTOGRP          Principal diagnosis:                              AUTOGRP          Three-digit   Four-digit
Entire DRG                           classification   Specific sub-category                             classification   comparison    comparison
Ischemic heart disease except AMI         40.1        Chronic ischemic heart disease                         38.6            38.6          36.8
Cerebrovascular diseases                  84.7        Cerebrovascular diseases                               84.7            68.4          58.5
Fractures                                 87.7        Fracture, neck of femur                                93.4            93.4          70.6
Diseases of the eye                       95.0        Cataract                                               97.9            97.9          97.4
Acute myocardial infarction               76.1        Acute myocardial infarction                            76.1            76.1          67.3
Hernia of abdominal cavity                89.8        Inguinal hernia without mention of obstruction         96.7            96.7          96.7
Diabetes mellitus                         56.2        Diabetes mellitus                                      56.2            56.2          49.7

Table 8. Continued

Principal diagnosis:                                    AUTOGRP          Principal diagnosis:                                     AUTOGRP          Three-digit   Four-digit
Entire DRG                                              classification   Specific sub-category                                    classification   comparison    comparison
Diseases of the prostate                                     85.0        Hyperplasia of the prostate                                   87.1            87.1          87.1
Pneumonia                                                    77.8        Bronchopneumonia--organism not specified and
                                                                           pneumonia--organism and type not specified                 80.5            75.9          75.9
Diseases of the gall bladder and bile duct                   79.9        Cholelithiasis/cholecystitis                                  84.4            76.3          62.8
Miscellaneous diseases of intestine and peritoneum           58.6        Intestinal obstruction without mention of hernia              68.8            68.8          58.1
Heart failure                                                58.9        Congestive heart failure and left ventricular failure         60.6            61.0          58.5
Enteritis, diverticula and functional disorders
  of intestine                                               86.5        Diverticulosis of intestine                                   86.9            86.9          86.5
Bronchitis                                                   95.7        Bronchitis                                                    95.7            89.8          89.8
Malignant neoplasm of respiratory system                     74.9        Malignant neoplasm of bronchus and lung                       79.9            79.9          79.9
All else                                                       *         All else                                                        *             74.3          53.5

*AUTOGRP classification is not applicable to this category.

The reasons for discrepancy were re-examined to determine whether special problems were associated with particular diagnoses (see Table 9). Because of the small number of abstracts, the reasons were grouped into those associated with determining principal diagnosis (ordering) and those associated with assigning a code number after the principal diagnosis had been selected (coding). When the diagnosis on "either" abstract was equally acceptable, most problems were related to the ordering of diagnosis. Where the IOM abstract was determined to be correct, most diagnoses with high discrepancy rates had ordering problems, particularly chronic ischemic heart disease and diabetes, which often have associated co-morbidity. Ordering problems were also associated with cataract and diverticulosis of the intestine, however, even though the data for these diagnoses were quite reliable. Diagnoses whose reliability improved with less specific coding tended to have coding discrepancies.

Influence of Multiple Diagnoses

Since Medicare patients frequently have multiple chronic conditions, this may complicate the determination of a principal diagnosis for any particular hospitalization. Therefore, the influence of the presence of additional diagnoses on the reliability of principal diagnosis was examined. The presence of additional diagnoses was determined by the field team using the guidelines listed in the Specific Instructions (see Appendix D). A reconciliation was conducted when the data sources were not in agreement. The data from this reconciliation have been used in the table presented below.

Table 10 shows the influence of additional diagnoses on the reliability of principal diagnosis compared to four digits. For most diagnoses that had a relatively low level of reliability in Table 8, the percent of abstracts with no discrepancy increased if the analysis was confined to those with no additional diagnoses. This was particularly evident for chronic ischemic heart disease, acute myocardial infarction, diabetes, congestive heart failure, and intestinal obstruction without mention of hernia. Co-morbidity also shows some effect on hyperplasia of the prostate, bronchopneumonia and pneumonia unspecified, and diverticulosis of the intestine.

Since the presence of additional diagnoses influences the accuracy of data, adjustments in analysis might be made if the fact of co-morbidity were accurately noted on the Medicare claim form. As shown in Table 1, this occurs for only 74.5 percent of the discharges. The reasons for discrepancies between the field team and the Medicare record on this item show that in some instances the errors stem from procedures within the hospitals, which require that only one diagnosis be submitted to HCFA--the "hospital definition" response (see Table 11). However, for most discrepancies, more accurate information would have been obtained if a more complete review of the medical record had been conducted.

Table 9. Reasons for Discrepancy by Principal Diagnosis and Correct Source of Data, Based on Four-Digit Comparisons (weighted percent)

Correct data source columns: IOM abstract (Ordering, Coding, Unweighted number of abstracts) and Either (Ordering, Coding, Unweighted number of abstracts).

Principal diagnosis on Medicare record:
Chronic ischemic heart disease   81.0   19.0   178   100.0   -   22
Cerebrovascular diseases   46.8   53.2   115   66.7   33.3   14
Fracture, neck of femur   20.0   80.0   47   66.7   33.3   1
Cataract   55.5   44.5   8
Acute myocardial infarction
Inguinal hernia without mention of obstruction
Diabetes mellitus   51.7   48.3   66   40.0   60.0   8   85.0   15.0   97
Hyperplasia of the prostate   45.5   54.5   18   4   100.0   -   8   100.0   1   66.7   33.3   9   100.0   -   8
Bronchopneumonia--organism not specified and pneumonia--organism and type not specified   43.5   56.5   41   -   65.0   35.0   9
Cholelithiasis/cholecystitis   39.0   61.0   55
Intestinal obstruction without mention of hernia   30.0   70.0   27
Congestive heart failure and left ventricular failure   53.5   46.5   56   100.0   -   3   100.0   -   5   100.0   -   10
Diverticulosis of intestine   87.5   12.5   13   100.0   -   6
Bronchitis   42.9   57.1   7
Malignant neoplasm of bronchus and lung   57.6   42.4   22   50.0   50.0   3   100.0   -   4

Table 10. Frequency of Agreement Between the Medicare Record and IOM Abstract for Principal Diagnosis Coded to the Fourth Digit and Categorized by the Presence of Additional Diagnoses

                                                        With additional diagnoses       Without additional diagnoses
Principal diagnosis                                     Total unweighted   Percent      Total unweighted   Percent
on Medicare record                                      abstracts          agreement    abstracts          agreement
Chronic ischemic heart disease                               331              35.5           21               57.9
Cerebrovascular diseases                                     269              57.7           51               62.8
Fracture, neck of femur                                      136              71.1           70               69.1
Cataract                                                     115              96.2          127               98.7
Acute myocardial infarction                                  185              63.1           42               84.9
Inguinal hernia without mention of obstruction                79              94.1           61               99.7
Diabetes mellitus                                            211              47.2           13               90.6
Hyperplasia of the prostate                                  157              84.0           72               95.9
Bronchopneumonia--organism not specified and
  pneumonia--organism and type not specified                 177              74.6           32               81.0
Cholelithiasis/cholecystitis                                 139              62.3           45               64.5
Intestinal obstruction without mention of hernia              70              50.6           27               86.6
Congestive heart failure and left ventricular failure        137              53.1           18               87.2
Diverticulosis of intestine                                  109              85.5           29               92.0
Bronchitis                                                    56              93.8           10               72.8
Malignant neoplasm of bronchus and lung                      103              81.7           35               74.5

Table 11. Reason for Discrepancy for Presence of Additional Diagnoses by Correct Data Source (weighted percent)

                                              Correct data source
Reason for discrepancy           Medicare record   IOM abstract   Either   Neither
Completeness                          72.6             86.4         13.4
Hospital definition                    3.0*             7.7           -
Importance                            24.4              5.9         86.6
Total                                100.0%           100.0        100.0
(Percent of total number
of abstracts)                         (1.3)           (23.5)        (0.7)     (-)

*This reason for discrepancy was inappropriately selected in this instance.

In summary, the analysis of diagnostic information suggests that the Medicare record contains correct data for at least 59.5 percent of the cases when codes are compared to four digits and 63.8 percent with three-digit comparisons (see Table 12). These figures can be adjusted upwards, depending on assumptions about the few cases where the correct data source could not be determined. If one examines the discrepant abstracts for which a correct data source could be determined (either Medicare or IOM) and applies the appropriate percents to the cases for which a correct data source could not be determined, the following conclusions are reached: with four-digit codes the Medicare records are correct in 59.8 percent of the cases; with three-digit comparisons the corresponding figure is 64.1 percent. Alternatively, one might assume that the Medicare and IOM data are equally likely to be correct. All the "indeterminates" could then be added to the column in which the Medicare record was correct, and the resulting percents would be 64.1 with four-digit comparisons and 68.2 with three-digit comparisons.

In any case, individual diagnoses contribute to the overall levels of accuracy. Those with higher rates of discrepancies include chronic ischemic heart disease, cerebrovascular diseases, acute myocardial infarction, and diabetes. The discrepancies associated with these diagnoses tend to reflect difficulty in determining the principal diagnosis (an ordering discrepancy); frequently the patients had multiple diagnoses. There does not appear to be a systematic bias within the hospitals to submit an admitting diagnosis on the claim form in lieu of a more carefully established principal diagnosis.
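The two adjustments described above can be checked with a little arithmetic. The sketch below is illustrative only; it uses the rounded weighted percents from Table 1 (four-digit comparison) rather than the study's unrounded figures.

    # Sketch of the two adjustments, using the rounded four-digit percents
    # from Table 1.

    no_discrepancy = 57.2   # Medicare record and IOM abstract agree
    medicare_only  = 2.3    # discrepancy, Medicare record correct
    iom_only       = 35.7   # discrepancy, IOM abstract correct
    either         = 4.6    # discrepancy, either source acceptable

    correct = no_discrepancy + medicare_only        # the 59.5 "at least" figure

    # Adjustment 1: allocate the "either" cases in proportion to the
    # discrepant cases whose correct source could be determined.
    adjusted_1 = correct + either * medicare_only / (medicare_only + iom_only)
    print(round(adjusted_1, 1))     # 59.8

    # Adjustment 2: count every "either" case as one in which the Medicare
    # record was correct.
    adjusted_2 = correct + either
    print(round(adjusted_2, 1))     # 64.1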

Table 12. Summary Table: Accuracy of Principal Diagnoses on Medicare Records

Level of comparison   Correct*   Incorrect**   Indeterminate***   Total
Four-digit              59.5        35.9             4.6          100.0%
Three-digit             63.8        31.8             4.4          100.0

*This column contains all cases for which there were no discrepancies and those for which there was a discrepancy, but the Medicare record was determined to be correct (see Table 1).

**This column contains all cases for which there were discrepancies and the IOM abstract or "neither" was determined to be correct (see Table 1).

***This column contains all cases for which there were discrepancies, but it was not possible to state with certainty which data source was correct, and the cases were therefore assigned to the "either" category.

ANALYSIS OF PRINCIPAL PROCEDURES

The analysis of information on principal procedures was similar to that for diagnoses. Table 1 showed that the Medicare record and IOM abstract agreed on principal procedure for 78.9 percent of the cases. The reasons for discrepancy are shown in Table 13.

When discrepancies occurred, the IOM abstract was usually determined to be correct. Most problems related to the coding, rather than ordering, of procedures. The reason most frequently selected to explain discrepancies was "coding-completeness," indicating that reliability might be improved if more care were taken in reviewing the medical record. Other reasons included "coding-procedure," describing a systematic misuse or misunderstanding of the coding system, and "coding-importance," reflecting a difference of opinion about the importance of a procedure listed by the hospital and subsequently by Medicare. For example, the hospital billing office may have listed a transfusion, while the IOM field team may not have regarded this as a principal procedure in accord with UHDDS. Ordering problems occurred less frequently than coding errors. When they were noted and the IOM abstract was correct, the specific reason most frequently selected was "ordering-completeness." Thus, an incomplete review of the medical record accounted for about forty percent of all discrepancies when the IOM abstract was correct.

Table 13. Reason for Discrepancy in Principal Procedure by Correct Data Source (weighted percent)

Correct data source columns: Medicare record, IOM abstract, Either, Neither*.

Ordering-SSA definition   5.1**   1.3
Ordering-hospital list   -   2.5   1.2
Ordering-completeness   8.4
Ordering-judgment   1.1
Ordering-other; Ordering-dependent; Coding-clerical; Coding-completeness; Coding-procedure; Coding-importance; Coding-judgment; Coding-other:
10.4   13.8   3.0   9.9   1.3   29.7   14.7   0.7   10.1   1.2   0.4   2.7   4.3   3.8   -   33.6   28.1   18.8   3.0   1.2   11.5   56.1   -   0.2   21.9
Total   100.0%   100.0   100.0
(Percent of total number of abstracts)   (1.7)   (17.3)   (1.7)   (0.4)

*The analysis of cases for which "neither" was correct is not presented because the numbers are too small.

**The "ordering--SSA definition" reason for discrepancy was inappropriately selected in this instance.

The examination of information on procedures is complicated by the uncertainty about which procedures should be included on the bill, inadequacies of the CPT classification system, and the frequency with which HCFA receives bills containing either no procedural information or illegible information. An attempt was made to categorize the data base according to the status of procedural information and then to examine the concordance between the field team's work and information on the Medicare record.

Table 14 shows that on most Medicare records (2,990 of 4,745, or about sixty-three percent), no procedure is coded--presumably because no procedure was performed and the hospital billing office did not submit this information to the intermediary. The frequency of discrepancy is affected when this fact is taken into account. When the analysis is confined to those cases for which the Medicare record did not include a procedure code, there was agreement with the IOM abstract about ninety percent of the time. Alternatively, for about ten percent of those cases the field team thought that a principal procedure should have been coded, but was not. The rate of agreement decreased to fifty-seven percent when the analysis was confined to only those cases for which a procedure code appeared on the Medicare record.

The reasons for discrepancies when there was a code on the Medicare record primarily reflected coding problems, rather than ordering. Although failure to follow established coding procedures in recording the narrative on which the code is based was important, the major reason for discrepancy again stemmed from an incomplete review of the medical record. In addition, some discrepancies may occur because Medicare personnel attempt to code everything that appears in the principal procedure position on the claim form. In some cases the field team may have felt that a particular procedure should be listed as principal in accord with UHDDS and wrote the procedure on the abstract form. But because there was no directly related CPT code in the manual, the field team would not code it. For the same cases, however, Medicare coders would "force" the procedure into a code (based on anatomical classification or a similar procedure for which CPT included a code), construct what is referred to as an "X-code," or use a master code. Regardless, this would lead to discrepancies between the Medicare record and IOM abstract. Examples of procedures that were not coded by the field team because of limitations of CPT are found in Appendix M.

Table 14. Discrepancy Between Medicare Record and IOM Abstract and the Correct Data Source for Selected Categories of Procedures as Coded by SSA (weighted percent)

                                       No discrepancy   Medicare record   IOM abstract   Either   Neither   Total
All procedures (n = 4745)                   78.9              1.7             17.3         1.7      0.4     100.0%
No procedure coded (n = 2990)               89.7               -               8.9         1.4       -      100.0
Procedure coded using CPT (n = 1754)        56.6              5.2             34.7         2.3      1.2     100.0
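The conditioning in Table 14 on whether a procedure code appears can be illustrated with a minimal sketch. The record layout, weights, and the procedure code value used below are placeholders, not the study's actual data or the procedure coding scheme in use at the time.

    # Hedged sketch: weighted agreement on principal procedure, computed
    # separately for records with and without a procedure code on the
    # Medicare record.  All values are hypothetical.

    def agreement_by_stratum(records):
        totals, agrees = {}, {}
        for r in records:
            stratum = "procedure coded" if r["medicare_code"] else "no procedure coded"
            totals[stratum] = totals.get(stratum, 0.0) + r["weight"]
            if r["medicare_code"] == r["iom_code"]:   # both None counts as agreement
                agrees[stratum] = agrees.get(stratum, 0.0) + r["weight"]
        return {s: round(100.0 * agrees.get(s, 0.0) / totals[s], 1) for s in totals}

    example = [
        {"weight": 1.0, "medicare_code": None,   "iom_code": None},     # none performed
        {"weight": 1.0, "medicare_code": None,   "iom_code": "1234"},   # field team coded one
        {"weight": 1.0, "medicare_code": "1234", "iom_code": "1234"},   # both coded, agree
    ]
    print(agreement_by_stratum(example))
    # {'no procedure coded': 50.0, 'procedure coded': 100.0}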

In summary, for about sixty-three percent of the Medicare records, no principal procedure is coded, presumably because no procedure was performed and/or the hospital billing office did not submit this information. When principal procedure is coded, there was agreement between the IOM abstract and the Medicare record in about fifty-seven percent of the cases. The reasons for discrepancy usually stemmed from coding problems, rather than ordering, and the most frequent reason was the failure to adequately review the medical record before recording a narrative on which the code for principal procedure was based.

ANALYSIS OF CLAIMS INFORMATION

As noted earlier, the analyses completed to this point are based on comparisons between the Medicare record and the IOM abstract, assuming that data on the Medicare record accurately reflect information from the claim form submitted by the hospital to the fiscal intermediary and, eventually, to HCFA. It is possible that the information submitted on the claim form is not an accurate reflection of the patient's condition as determined by the field team using UHDDS definitions. In such cases, one would expect that the information on the Medicare record would not agree with that contained on the IOM abstract. On the other hand, the information submitted on the claim form may accurately reflect the patient's condition and agree with the IOM abstract, but still differ from that contained on the Medicare record--either because of a special Medicare coding guideline or because of a coding error. To help determine the relative influence of these possibilities on the quality of data, the extent of agreement between information coded on the IOM abstract and that contained on the hospital copy of the Medicare claim form was analyzed.

Whenever the field team discovered a discrepancy between the Medicare record and the IOM abstract in which the former was not correct, the hospital's copy of the appropriate claim was consulted. Narrative information from the claim was copied on the abstract form and the first-listed diagnosis or related procedure was coded, since this is the customary procedure for Medicare coders (there are exceptions, as will be noted later). Codes from the initial abstracting of the medical record and the item listed on the claim form (both assigned by the field team) were compared to determine whether or not the information submitted by the hospital on the claim form accurately reflected the patient's condition. The results of these comparisons are presented below, beginning with principal diagnosis and then principal procedure.

Principal Diagnosis

Table 15 shows that for about seventy percent of the cases for which discrepancies were found on principal diagnosis, the information submitted by the hospital billing office to the intermediary did not accurately reflect the patient's condition.

Table 15. Agreement Between Principal Diagnosis Coded from the Claim Form and the IOM Abstract for Discrepant Medicare Records (weighted percent)

                             Agree   Disagree   Total
Four-digit (n = 1402)         29.6      70.4    100.0%
Three-digit (n = 1244)        29.4      70.6    100.0

Of those cases for which the information was incorrect, about seventy-five percent of the principal diagnoses on the Medicare record agreed with the first-listed diagnosis on the claim form. In other words, even though the information submitted by the hospital was incorrect, it was accurately coded by Medicare coders. For the remaining twenty-five percent, the Medicare record did not agree with the first-listed diagnosis on the claim form.

The data gathered by the field team did not permit a direct assessment of the frequency with which discrepancies might stem from either the correct application of a special Medicare coding guideline that required coding something other than the first-listed item, or from an error. Therefore, information from all cases with discrepancies on principal diagnosis between the Medicare record and IOM abstract (where the field team had obtained a copy of the appropriate claim form) was submitted to senior HCFA RRAs to determine the accuracy of the HCFA coding function. They were asked to code the information and also to indicate whether they had simply coded the first-listed item or had applied a special coding guideline. It might be noted that this exercise approximates a reliability assessment. But because different, and more experienced, coders performed the re-coding, it is not a test of reliability (repeatability) in the customary sense.

The results of the re-coding of diagnostic information are presented in Tables 16 and 17. They indicate that there is variability in the Medicare coding function, but do not lead to firm conclusions about the reasons for variability. More specifically, with four-digit comparisons where the hospital claim form accurately reflected the patient's condition, the Medicare re-code agreed with the claim form in 41.8 percent of the cases. Since none of the initial Medicare codes reflected the claims information for these cases, this suggests that the initial Medicare record may have contained erroneous information introduced by Medicare coders. For the remaining 58.2 percent of the cases where the claims data were correct, the Medicare re-code did not agree with the first-listed diagnosis on the claim. As expected, special guidelines were applied in 51.3 percent of those cases. For the remaining 48.7 percent, the first-listed diagnosis was coded, but the discrepancy persisted.
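The cross-classification reported in Tables 16 and 17 can be illustrated with a small sketch. The record layout, field names, and example codes below are hypothetical, and the tallies are unweighted counts for simplicity rather than the study's weighted percents.

    # Hedged sketch: classify re-coded discrepant cases by accuracy status
    # (does the first-listed claim item reflect the patient's condition?),
    # coding method (first-listed item vs. special convention), and agreement
    # of the re-code with the first-listed claim item.  Hypothetical data.
    from collections import Counter

    def recoding_crosstab(cases):
        counts = Counter()
        for c in cases:
            accuracy = ("claim reflects condition"
                        if c["iom_code"] == c["claim_first_listed"]
                        else "claim does not reflect condition")
            method = "special convention" if c["used_convention"] else "first-listed"
            agreement = "agree" if c["recode"] == c["claim_first_listed"] else "disagree"
            counts[(accuracy, method, agreement)] += 1
        return counts

    example = [
        {"iom_code": "4280", "claim_first_listed": "4280",
         "recode": "4280", "used_convention": False},
        {"iom_code": "4280", "claim_first_listed": "4140",
         "recode": "4140", "used_convention": False},
    ]
    for cell, n in recoding_crosstab(example).items():
        print(cell, n)
    # ('claim reflects condition', 'first-listed', 'agree') 1
    # ('claim does not reflect condition', 'first-listed', 'agree') 1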

Table 16. Reliability of Medicare Coding of Four-Digit Principal Diagnosis for Cases Where There Were Discrepancies Between Medicare Record and IOM Abstract

Percent of cases in which Medicare code agrees with first-listed diagnosis on claim form

                                            Initial coding                Re-coding for IOM study
Accuracy status and coding method      Agree   Disagree   Total        Agree    Disagree   Total

IOM abstract = claim first-listed (claim reflects patient condition), n = 344
  First-listed                                                          54.8      45.2     100.0
                                                                       (82.2)    (48.7)    (62.7)
  Special convention                                                    19.9      80.1     100.0
                                                                       (17.8)    (51.3)    (37.3)
  Total*                                  0%       0%        0%         41.8      58.2     100.0%
                                                                      (100.0)   (100.0)   (100.0%)

IOM abstract ≠ claim first-listed (claim does not reflect patient condition), n = 1058
  First-listed                           74.5     25.5     100.0        82.7      17.3     100.0
                                                                       (70.5)    (38.7)    (61.8)
  Special convention                      #        #         #          55.8      44.2     100.0
                                                                       (29.5)    (61.3)    (38.2)
  Total                                  74.5     25.5     100.0%       72.5      27.5     100.0%
                                                                      (100.0)   (100.0)   (100.0%)

*Since only cases with discrepancies were submitted for re-coding, the totals for the initial coding are automatically zero.

#The absence of this information created the need for the re-coding exercise reported in the right-hand side of the table.

Greater consistency is found between the Medicare initial code and re-code for those cases where the hospital claim did not accurately reflect the patient's condition. The overall levels of agreement between the Medicare code and the claim form were 74.5 percent for the original records and 72.5 percent for the re-codes. As expected, conventions were used for 61.3 percent of the cases in which the two did not agree. The residual 38.7 percent, for which the first-listed diagnosis was coded but the discrepancy remained, may be regarded as error, assuming the IOM field team was correct.

One may hypothesize that the generally higher consistency of Medicare coding of cases in which the claims information was incorrect suggests that a rather simplistic, but erroneous, narrative was submitted by the hospital and was easily coded by the Medicare coders. On the contrary, the smaller number of cases for which discrepancies between Medicare and IOM were detected, but the claims information was correct, may have been more complicated cases for which the claims information was less straightforward.

Similar comparisons for three-digit diagnostic codes are presented in Table 17. When the claims data were accurate, the level of agreement between the Medicare re-code and claim form increases from 41.8 to 62.7 percent. Otherwise, the findings resemble those for four-digit comparisons.

In general, a portion of the discrepancies between the Medicare record and the first-listed item on the claim form stems from the appropriate application of special Medicare coding guidelines. However, there is an unexplained residual that probably reflects some error. There is definitely variability among HCFA coders. Nevertheless, the major factor contributing to discrepancies appears to be the failure of hospitals to provide accurate billing information.

Table 17. Reliability of Medicare Coding of Three-Digit Principal Diagnosis for Cases Where There Were Discrepancies Between Medicare Record and IOM Abstract

Percent of cases in which Medicare code agrees with first-listed diagnosis on claim form

                                            Initial coding                Re-coding for IOM study
Accuracy status and coding method      Agree   Disagree   Total        Agree    Disagree   Total

IOM abstract = claim first-listed (claim reflects patient condition), n = 299
  First-listed                                                          65.9      34.1     100.0
                                                                       (84.2)    (42.0)    (49.0)
  Special convention                                                    42.0      58.0     100.0
                                                                       (15.8)    (58.0)    (51.0)
  Total*                                                                62.7      37.3     100.0
                                                                      (100.0)   (100.0)   (100.0)

Table 17. Continued

Percent of cases in which Medicare code agrees with first-listed diagnosis on claim form

                                            Initial coding                Re-coding for IOM study
Accuracy status and coding method      Agree   Disagree   Total        Agree    Disagree   Total

IOM abstract ≠ claim first-listed (claim does not reflect patient condition), n = 945
  First-listed                           76.2     23.8     100.0%       84.3      15.7     100.0
                                                                       (69.7)    (59.1)    (61.8)
  Special convention                      #        #         #          59.1      40.9     100.0
                                                                       (30.3)    (40.9)    (38.2)
  Total                                  76.2     23.8     100.0%       74.6      25.4     100.0%
                                                                      (100.0)   (100.0)   (100.0%)

*Since only cases with discrepancies were submitted for re-coding, the totals for the initial coding are automatically zero.

#The absence of this information created the need for the re-coding exercise reported in the right-hand side of the table.

Principal Procedure

Table 18 shows that for about forty-three percent of the cases for which discrepancies were found on principal procedure, the information submitted by the hospital billing office to the intermediary did not accurately reflect the principal procedure performed during the hospital stay, as determined by the field team. This level of inaccuracy is increased to 55.2 percent when the analysis is limited to those cases where the Medicare record indicated by the use of a CPT code that a surgical procedure was performed.

Of those cases for which the information was incorrect, about seventy-six percent of the principal procedures on the Medicare record agreed with the data on the claim form coded by the field team. For the remaining twenty-four percent, the Medicare record did not agree with the claim form data, presumably because of the appropriate application of a special HCFA coding guideline or because of a mistake.

As with diagnoses, the data gathered by the field team did not permit a direct assessment of the frequency with which discrepancies might stem from either the correct application of a Medicare coding guideline or an error.

Table 18. Agreement Between Principal Procedure Coded from the Claim Form and the
IOM Abstract for Discrepant Medicare Records (weighted percent)

                                          Agree    Disagree    Total

All procedures (n = 805)                   56.7      43.3      100.0%
No procedure coded (n = 266)               80.3      19.7      100.0
Procedure coded using CPT (n = 539)        44.8      55.2      100.0

To ascertain this, information from all cases with discrepancies on principal procedure between the Medicare record and the IOM abstract (where the field team had obtained a copy of the appropriate claim forms) was submitted to senior HCFA RRAs to determine the reliability of the HCFA coding function.

The results of the re-coding of information on procedures (see Table 19) show a variability in the Medicare coding function similar to that found with diagnoses. For cases where the hospital claim form accurately reflected the procedure performed during the hospital stay, the Medicare re-code agreed with the claim form in 60.4 percent of the cases. Since none of the initial Medicare codes reflected the claims information for these cases, this suggests that erroneous information on the initial Medicare record may have been introduced by the Medicare coders. For the remaining 39.6 percent of the cases where the hospital claim data were accurate, the Medicare re-code did not agree with the first-listed procedure on the claim. Special coding guidelines were applied to only 39.2 percent of those cases. For the remaining 60.8 percent, the first-listed procedure was coded, but the discrepancy remained.

Where the hospital claim did not accurately reflect the procedure performed during the hospital stay, agreement between the Medicare code and the claim form reached 75.8 percent for the original records and 65.6 percent for the re-codes. As expected, special conventions were used in about sixty-one percent of the cases where the re-code and the first-listed claim item did not agree. The residual 39.5 percent may be regarded as error, assuming the IOM field team was correct.

This examination of data on procedures confirms the statement made previously regarding the accuracy of diagnostic information. Some discrepancies stem from the correct use of special Medicare coding guidelines. There is also variability among Medicare coders and probably a certain amount of error. Nevertheless, the major factor influencing the accuracy of diagnostic and procedure data is the failure of hospitals to provide accurate billing information.

Table 19. Reliability of Medicare Coding of Principal Procedure for Cases Where There
Were Discrepancies Between Medicare Record and IOM Abstract

                                     Percent of cases in which Medicare code agrees
                                     with principal procedure on claim form

Accuracy status                        Initial coding               Re-coding for IOM study
and coding method                 Agree   Disagree   Total        Agree   Disagree   Total

IOM abstract = claim first-listed (claim reflects patient care), n = 205

First-listed                                                       67.0      33.0     100.0
                                                                  (80.8)    (60.8)    (72.9)
Special convention                                                 42.8      57.2     100.0
                                                                  (19.2)    (39.2)    (27.1)
Total*                              0%        0%       0%          60.4      39.6     100.0
                                                                 (100.0)   (100.0)   (100.0)

IOM abstract ≠ claim first-listed (claim does not reflect patient care), n = 600

First-listed                       75.8      24.2    100.0         78.2      21.8     100.0
                                                                  (74.2)    (39.5)    (62.2)
Special convention                   #         #       #           44.9      55.1     100.0
                                                                  (25.8)    (60.5)    (37.8)
Total                              75.8      24.2    100.0%        65.6      34.4     100.0%
                                                                 (100.0)   (100.0)   (100.0)

*Since only cases with discrepancies were submitted for re-coding, totals for the
initial coding are automatically zero.
#The absence of this information created the need for the re-coding exercise reported
in the right-hand side of the table.
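The layout of Tables 17 and 19 can be confusing: within each block, the unparenthesized figures are row percentages (how often a given coding method produced agreement with the claim form), while the parenthesized figures beneath them are the corresponding column percentages (what share of all agreements, disagreements, or cases each coding method accounts for). The short sketch below is purely illustrative and uses invented counts rather than the study data; it simply shows how both sets of percentages are derived from the same cross-tabulation.

    # Illustrative sketch with invented counts -- not the study data.
    # Rows: coding method applied in the Medicare re-code.
    # Columns: whether the re-code agrees with the first-listed item on the claim form.
    counts = {
        ("first-listed", "agree"): 120, ("first-listed", "disagree"): 62,
        ("special convention", "agree"): 47, ("special convention", "disagree"): 70,
    }
    methods = ["first-listed", "special convention"]
    outcomes = ["agree", "disagree"]
    col_totals = {o: sum(counts[(m, o)] for m in methods) for o in outcomes}

    for m in methods:
        row_total = sum(counts[(m, o)] for o in outcomes)
        row_pct = {o: 100 * counts[(m, o)] / row_total for o in outcomes}      # unparenthesized figures
        col_pct = {o: 100 * counts[(m, o)] / col_totals[o] for o in outcomes}  # parenthesized figures
        print(m,
              {o: round(p, 1) for o, p in row_pct.items()},
              {o: round(p, 1) for o, p in col_pct.items()})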

INFLUENCE OF DIAGNOSTIC DATA RELIABILITY ON UTILIZATION STATISTICS

The analyses of diagnostic information presented to this point are based on cases for which a specific diagnosis was listed on the Medicare record as principal and the field team either agreed or disagreed with that determination. If there was a disagreement, the diagnosis on the Medicare record may be regarded as a false positive. However, there may also be cases for which the same specific diagnosis should have been listed as principal, but was not. These cases may be regarded as false negatives. The sampling plan permits an estimate of the extent to which both types of errors occur. More importantly, their influence on approximations of admission rates and lengths of stay can be explored. Table 20 helps to explain the methods for calculating these estimates.

Table 20. Calculation of Net and Gross Difference Rates in Designation of Principal Diagnosis

                                        Medicare record coded as principal
IOM abstracts
coded as principal             Specific diagnosis      Other        Total

Specific diagnosis                      a                 b          a + b
Other                                   c                 d          c + d
Total                                 a + c             b + d          N

Percent with no discrepancy = a / (a + c) x 100
Gross difference rate = (b + c) / N
Net difference rate = (b - c) / N

In Table 20, the cases included in cell "a" are those for which the specific diagnosis was coded as principal on both the Medicare record and the IOM abstract. The total number of differences affecting that figure for any specific diagnosis is equal to the number of cases included in that class on the original Medicare record, but not on the IOM abstract (cell "c"), plus the number included in that class on the

IOM abstract, but not on the Medicare record (cell "b"). Cell "d" includes all cases from the study population which do not have the specific diagnosis coded as principal on either data source.

The sum of the number of cases in cells "b" and "c," divided by the total number of cases in the population irrespective of diagnosis (N), may be termed the gross difference rate for the diagnosis in question. It reflects aggregate errors and usually includes differences in both directions, which may be partly offsetting. The net difference rate is the difference between "b" and "c," divided by N. It is an estimate of the non-offsetting part of the gross error. A negative net difference rate indicates that the influence of false positives is greater than that of false negatives.[1] Net and gross difference rates for the study diagnoses are in Appendix H.

Net and gross difference rates are useful in comparing the relative accuracy of different diagnoses and for measuring changes in the reliability of data over time. In interpreting them, however, the reader should note that a change in the frequency of occurrence of a particular diagnosis in a population is not necessarily reflected in net and gross difference rates. The number of cases for which both assessments agree (cell "a") may change without altering net and gross difference rates. The implications for reliability of similar net and gross difference rates for diagnoses with dissimilar incidence rates may be quite different. Therefore, the proportion of cases for which there is concordance between the abstract and re-abstract must be taken into account.

If the concepts of false negatives and false positives are used in calculating admission rates and lengths of stay, the operational implications of net and gross difference rates are easier to understand. Table 21 contains estimates of the distributions of specific diagnoses. Because of the absence of the population-based denominator customarily used to calculate admission rates, a proxy measure was computed based on the number of abstracts for Medicare patients with a particular diagnosis divided by the total number of Medicare admissions in the twenty percent sample. This is referred to as a "rate," although it is not one in the usual sense. The basic admission rates are based on the number of cases for which both the Medicare and IOM abstracts have the same principal diagnostic code (cell "a") divided by the total number of admissions. The Medicare admission rates are calculated by dividing the total number of Medicare records with a specific diagnosis (including false positives) by the total number of admissions. The IOM admission rates are calculated by dividing the total number of IOM abstracts with a specific diagnosis (including false negatives) by the total number of admissions.

[1] U.S. Department of Commerce, Bureau of the Census, The Current Population Survey Reinterview Program: Some Notes and Discussion, Technical Paper No. 6 (Washington, D.C.: U.S. Government Printing Office, 1963), pp. 8-9.
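For readers who prefer to see the arithmetic, the sketch below works through the Table 20 quantities and the three proxy admission rates described above. It is illustrative only; the cell counts and the function name are invented for this example and are not drawn from the study data.

    # Illustrative sketch with invented counts -- not the study data.
    #   a = specific diagnosis principal on BOTH the Medicare record and the IOM abstract
    #   b = principal on the IOM abstract only (false negative on the Medicare record)
    #   c = principal on the Medicare record only (false positive on the Medicare record)
    #   n = all Medicare admissions in the sample, irrespective of diagnosis

    def table20_measures(a, b, c, n):
        return {
            "pct_no_discrepancy": 100.0 * a / (a + c),    # a / (a + c) x 100
            "gross_difference_rate": (b + c) / n,         # (b + c) / N
            "net_difference_rate": (b - c) / n,           # (b - c) / N; negative => false positives dominate
            # proxy admission "rates" per 1,000 admissions, as in Tables 21 and 22
            "basic_rate": 1000.0 * a / n,                 # a / N
            "medicare_rate": 1000.0 * (a + c) / n,        # (a + c) / N, includes false positives
            "iom_rate": 1000.0 * (a + b) / n,             # (a + b) / N, includes false negatives
        }

    print(table20_measures(a=400, b=110, c=50, n=10_000))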

The rates are analyzed to three and four digits. However, the IOM rates are the same for both four and three digits because the cases for which the Medicare records and IOM abstracts disagreed at only the fourth digit are shifted from cell "b" to cell "a" in the three-digit comparisons. The numerator (a + b) remains the same and, therefore, the rate does not change.[2]

As one would expect, the basic rates usually increase as one moves from four to three digits. The Medicare admission rates are consistently higher than the basic admission rates for both three and four-digit comparisons, because they include the false positives. If the number of false positives were roughly equivalent to the number of false negatives, the Medicare rates might be an acceptable approximation of the "actual" rates. However, the IOM admission rates, which include the false negatives, are higher than the Medicare rates, with the exception of chronic ischemic heart disease, diabetes, and malignant neoplasm of bronchus and lung. The under-estimation of admissions using Medicare data is particularly noticeable for cerebrovascular disease and congestive heart failure.

This analysis can also be performed using cases from the entire DRG and comparing the diagnoses using the AUTOGRP classification system. When this approach is used (see Table 22), the results are similar to those obtained for the specific diagnoses within DRGs (see Table 21). Medicare data under-estimate the number of admissions, with the exception of diabetes, miscellaneous diseases of the intestine and peritoneum, malignant neoplasm of the respiratory system, and, most importantly, ischemic heart disease.

The influence of false positives and false negatives on length of stay may also be examined if the number of days is divided by the number of abstracts in the appropriate groupings of cells, as shown in Table 23. Four-digit lengths of stay for specific diagnoses are not consistently different from three-digit. Lengths of stay based on Medicare data (including false positives) are about equally likely to be higher or lower than the corresponding basic numbers for both three and four-digit comparisons. This is also true for the IOM lengths of stay (including false negatives). With the exception of fracture of the neck of the femur (where the IOM length of stay is about five days longer than either the basic or Medicare average), most differences are within a range of one day in either direction. When the entire DRG and the AUTOGRP classification are used (see Table 24), it is equally difficult to detect consistent differences.

The use of Medicare data to calculate diagnosis-specific admission rates may result in systematic distortions. The differences between IOM and Medicare data for diagnosis-specific lengths of stay are not consistent; nevertheless, they do exist.

[2] The rates in Tables 21 through 24 were not adjusted to account for the small number of cases for which there were discrepancies and the Medicare records were correct. Such adjustments were made on an exploratory basis with the previous data set. The changes in the rates were minuscule and insufficient to justify the added complexity of the calculations.
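The length-of-stay comparisons in Tables 23 and 24 follow the same cell logic: total days divided by the number of abstracts in the relevant grouping of cells. A minimal sketch, again with invented case records rather than the study data:

    # Illustrative sketch with invented records -- not the study data.
    # Each case carries the Table 20 cell it falls into and its length of stay in days.
    cases = [
        ("a", 9), ("a", 12), ("a", 10),   # diagnosis principal on both sources
        ("b", 15), ("b", 11),             # IOM abstract only (false negatives)
        ("c", 7),                         # Medicare record only (false positive)
    ]

    def mean_los(records, cells):
        days = [los for cell, los in records if cell in cells]
        return sum(days) / len(days)

    print("basic length of stay:   ", mean_los(cases, {"a"}))       # cell a only
    print("Medicare length of stay:", mean_los(cases, {"a", "c"}))  # all Medicare records with the diagnosis
    print("IOM length of stay:     ", mean_los(cases, {"a", "b"}))  # all IOM abstracts with the diagnosis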

Table 21. Influence of False Positives and Negatives on Proxy Admission Rates for
Specific Diagnoses Within a Diagnosis Related Group (times 1,000) Based on All
Medicare Admissions in the Twenty Percent Sample

                                        Basic admission     Medicare admission     IOM
                                          rate a/N             rate a+c/N        admission
Principal                              Four-    Three-      Four-    Three-        rate
diagnosis                              digit    digit       digit    digit        a+b/N

Chronic ischemic heart disease          36.2     38.0        96.7     98.5         52.2
Cerebrovascular diseases                40.1     47.1        50.7     57.7         71.3
Fracture, neck of femur                 14.4     19.1        15.7     20.4         22.3
Cataract                                29.1     29.4        29.7     30.0         30.7
Acute myocardial infarction             16.4     18.5        22.2     24.3         27.4
Inguinal hernia without mention
  of obstruction                        12.3     12.3        12.7     12.7         13.8
Diabetes mellitus                       12.6     14.2        23.7     25.4         21.1
Hyperplasia of the prostate             18.6     18.6        21.3     21.3         22.4
Bronchopneumonia-organism not specified
  and pneumonia-organism and type not specified
Cholelithiasis/cholecystitis
Intestinal obstruction without mention of hernia
Congestive heart failure and left ventricular failure
Diverticulosis of the intestine
Bronchitis
Malignant neoplasm of bronchus and lung

[The rate entries for the last seven diagnoses are garbled in the source scan and
could not be reliably assigned to columns.]

Table 22. Influence of False Positives and Negatives on Proxy Admission Rates for All
Diagnoses Within a Diagnosis Related Group (times 1,000) Based on All Medicare
Admissions in the Twenty Percent Sample

Diagnosis                                     Basic admission   Medicare admission     IOM
related group                                 rate a/N          rate a+c/N          admissions
                                              (AUTOGRP)         (AUTOGRP)           a+b/N

Ischemic heart disease except AMI                  49.9              107.7             66.9
Cerebrovascular diseases                           58.4               68.9             71.3
Fractures                                          42.7               47.6             49.6
Diseases of the eye                                35.9               37.2             38.0
Acute myocardial infarction                        18.5               24.3             27.4
Hernia of abdominal cavity                         26.0               28.1             30.9
Diabetes mellitus                                  14.2               25.4             21.1
Diseases of the prostate                           20.8               23.7             25.0
Pneumonia                                          26.3               30.5             36.9
Diseases of the gall bladder and bile duct         17.9               21.4             23.7
Miscellaneous diseases of the intestine
  and peritoneum                                   11.6               19.5             17.7
Heart failure                                      10.6               17.5             35.3
Enteritis, diverticula and functional
  disorders of intestine                           16.0               18.5             23.9
Bronchitis                                         11.4               14.3             14.7
Malignant neoplasm of respiratory system           10.3               13.6             12.0

Table 23. Influence of False Positives and Negatives on Average Lengths of Stay for
Specific Diagnoses Within a Diagnosis Related Group Based on All Medicare Admissions
in the Twenty Percent Sample

                                        Basic length         Medicare length        IOM
                                          of stay               of stay           length
Principal                              Four-    Three-      Four-    Three-          of
diagnosis                              digit    digit       digit    digit          stay

Chronic ischemic heart disease          10.0     10.0        10.7     10.7      [illegible]
Cerebrovascular diseases                12.8     12.8        12.6     12.6         12.2
Fracture, neck of femur                 20.8     20.8        20.1     20.4         25.7
Cataract                                 5.0      5.0         5.1      5.1          5.4
Acute myocardial infarction             14.4     14.2        13.7     13.6         13.6
Inguinal hernia without mention
  of obstruction                         7.1      7.1         7.2      7.2          7.3
Diabetes mellitus                       10.9     10.6        12.2     11.9         10.7
Hyperplasia of the prostate             12.2     12.2        12.2     12.2         12.5
Bronchopneumonia-organism not
  specified and pneumonia-organism
  and type not specified                10.9     10.9        11.3     11.3         10.7
Cholelithiasis/cholecystitis            12.8     13.2        12.1     12.5         13.4
Intestinal obstruction without
  mention of hernia                     12.2     12.7        13.8     14.0         11.3
Congestive heart failure and left
  ventricular failure                    9.4      9.2        10.0      9.8         11.3
Diverticulosis of intestine              8.3      8.3         9.0      9.0         10.6
Bronchitis                               7.9      7.9         8.3      8.3          8.2
Malignant neoplasm of bronchus
  and lung                              [values not legible in the source scan]

Table 24. Influence of False Positives and Negatives on Average Lengths of Stay for All
Diagnoses Within a Diagnosis Related Group Based on All Medicare Admissions in the
Twenty Percent Sample

Diagnosis                                     Basic length    Medicare length    IOM length
related group                                   of stay          of stay          of stay

Ischemic heart disease except AMI                  9.7             10.7             10.1
Cerebrovascular diseases                          12.2             12.2             12.2
Fractures                                         19.1             18.5             18.9
Diseases of the eye                                5.1              5.3              5.2
Acute myocardial infarction                       14.2             13.6             13.6
Hernia of abdominal cavity                         9.9              9.9              9.6
Diabetes mellitus                                 10.6             11.9             10.7
Diseases of the prostate                          12.0             11.8             12.2
Pneumonia                                         10.8             11.2             10.6
Diseases of the gall bladder and bile duct        13.0             12.7             14.4
Miscellaneous diseases of the intestine
  and peritoneum                                  11.2             12.3             11.0
Heart failure                                     10.1             10.4             11.7
Enteritis, diverticula and functional
  disorders of intestine                           8.1              8.7             10.2
Bronchitis                                         7.9              8.2              8.2
Malignant neoplasm of respiratory system          13.0             11.8             13.0

Influence of Hospital Characteristics

To gain further insight into the influence of hospital characteristics on the reliability of data, selected aspects of the process by which claims information is obtained within the hospital and forwarded to the fiscal intermediary were examined. Each hospital or abstracting-process characteristic was cross-tabulated by the percent of abstracts for which there were no discrepancies between the Medicare record and the IOM abstract. The effect on diagnoses was measured at the four-digit, three-digit, and AUTOGRP levels of comparison; the influence on procedures was also examined. A chi-square test of significance was calculated to determine the independence of the two variables.[3]

As shown in Table 25, the influence of most variables was statistically significant. Interpretation is difficult, however, because the resulting relationships were not always consistent for all dependent variables. Occasionally the relationships were statistically significant, but not meaningful--presumably because of inter-correlations with other variables which more directly affect the quality of data. The more important relationships are summarized below. Unless otherwise noted, the effect of AUTOGRP was the same as the three-digit comparison.

Table 25. Relationships Between Hospital and Abstracting Process Characteristics and
the Accuracy of Information on Diagnosis and Procedure

                               Four-digit            Three-digit
Characteristics                diagnosis             diagnosis            Procedures

Personnel and Training

Training of billing            Billing office        Same as four-        Not appropriate
personnel where they           training with no      digit                for procedure
review portions of             medical record
records for diagnosis          experience =
                               better data

Training of personnel          Data from             Same as four-        Same as four-
abstracting informa-           physicians and        digit                digit
tion where billing             RRAs are better
uses abstracted data           than ARTs or
                               others

[3] Because of the instability of the weighted numbers, the chi-square was based on a re-distribution of the unweighted numbers according to the weighted percentages. A statistically significant relationship was assumed if the chance of its occurrence was less than .05.

Table 25. Continued

                               Four-digit            Three-digit
Characteristics                diagnosis             diagnosis            Procedures

Abstracting Process

Source of abstracted           Typed discharge       Same as four-        Copy of face
data used by billing           list or copy of       digit except         sheet or entire
                               face sheet =          admit sheet or       record = more
                               more accurate;        entire record =      accurate data;
                               computerized          least accurate       typed discharge
                               discharge list =                           list = least
                               least accurate                             accurate

Description of diag-           Diagnostic codes      Same as four-        Not appropriate
nostic data received           more accurate         digit diagnosis      for procedure
by billing                     than narrative
                               description

Time lapse between             Significant but       Significant but      Not appropriate
patient discharge and          not meaningful        not meaningful       for procedure
transfer of diagnostic
information to billing

Time lapse between             Significant but       Significant but      Not appropriate
patient discharge and          not meaningful        not meaningful       for procedure
determination of a
final diagnosis

Submission of updated          Submission of         Not significant      Not appropriate
diagnostic information         updated informa-                           for procedure
to billing office              tion = more
                               accurate data

Submission of updated          Submission of         Same as four-        Not appropriate
diagnostic information         updated informa-      digit                for procedure
to the fiscal                  tion = more
intermediary                   accurate data

Definitions used in            Use of Medicare       Same as four-        Not significant
determining principal          definition = more     digit
diagnosis or procedure         accurate; first-
                               listed = less
                               accurate

Table 25. Continued

                               Four-digit            Three-digit
Characteristics                diagnosis             diagnosis            Procedures

Hospital Characteristics

Geographic region              Northeast region      Same as four-        Same as four-
                               = less accurate       digit                digit
                               data

Population density             Not significant       Non-SMSA = more      Non-SMSA = more
                                                     accurate             accurate

Control                        Not significant       Not significant      Proprietary =
                                                                          more accurate;
                                                                          voluntary = less
                                                                          accurate

Bed size                       Smaller hospitals     Same as four-        Same as four-
                               = better data         digit                digit

The checklist included an item intended to elicit information about the training of the person reviewing the medical record to retrieve claims data, regardless of whether the function was performed in the billing office or elsewhere. In hospitals where portions of the medical record are transmitted to the billing office, persons trained in billing office procedures but without medical records experience were associated with better data than were persons without that training. Presumably, the training would include methods for retrieving diagnostic and procedural information from the medical record.

Where a discharge list or some other summary of abstracted information is used by the billing office to complete the claim form, the data were better if the abstracted information was provided by either a physician or an RRA.

The reliability of data across categories was less consistently influenced by the source of abstracted information used by billing. This may suggest that the care with which the information is either recorded or abstracted, and the training of the persons involved in those functions, is more important than the actual document (typed discharge list, computerized discharge list, face sheet, etc.). In any case, when the billing office was provided with diagnostic codes, rather than narrative information, the claims data tended to be more accurate. Similarly, the data were more accurate in hospitals where updated diagnostic information is regularly submitted to the billing department, as well as to the fiscal intermediary.

60 accurate data, despite the fact that the field team used the UHDDS definition as the basis for comparison. It is possible, however, that hospitals which profess to use the Medicare definition do not consistently apply it. Data were least accurate when the first-listed diagnosis on the face sheet was routinely used for designating a principal diagnosis. Definitions for principal procedure were not significantly associated with the accuracy of data. The accuracy of both diagnostic and procedure data varied by geographic region. Invariably, hospitals in the Northeast region provided less ac- curate data than hospitals in the South, West, or North Central regions. Hospitals outside a Standard Metropolitan Statistical Area (SMSA) pro- vided more accurate data than those located within a SMSA, although the differences were statistically significant only for diagnoses at three- digits and for principal procedure. Arrangements for hospital control did not influence the accuracy of diagnostic data, although proprietary hospitals had more accurate data for principal procedure. Hospitals with fewer beds were found to have more accurate data for both diagnoses and procedures. In an attempt to determine the relative influence of hospital charac- teristics on reliability of data, simple and multiple regressions were performed using the characteristics as independent variables. Census region was the only independent variable which was consistently as- sociated with the accuracy of diagnostic and procedure data. For all regressions, the amount of variance explained was low, reaching a max- imum of 0.125. The analysis of hospital and billing process characteristics may be useful in instituting program changes to increase the accuracy of diagnostic and procedure data. The reader should note, however, that this information was obtained informally during visits to the study hospitals and the degree of subjectivity in the responses could not be ascertained. In addition, several of the process characteristics may be correlated within a particular hospital, even though there was very little correlation among these characteristics for all hospitals combined. Despite these limitations, it appears that billing office personnel with training in billing procedures, but no medical record experience, may provide accurate diagnostic information if accurate information is provided by the medical record department. If RRAs abstract and code the information and submit it to the billing office, the data forwarded to the fiscal intermediaries tend to be more accurate. The role of physicians in recording patient information is an important variable. In addition, the management practice of having medical record departments submit updated diagnostic information to the billing office and the fiscal intermediaries aids in increasing the accuracy of data. Of the structural characteristics, only the geographic region of the country in which hospitals are located and hospital size were significantly and consistently linked with the accuracy of data.
