Kidney Failure and the Federal Government

10 Reimbursement Effects on Quality

OBRA 1987 asked this study to consider the effects of reimbursement on quality of treatment. This question arises from the need to understand the effects on patients of ESRD cost-containment efforts. Although there is a substantial literature on reimbursement and a growing literature on quality, the effects of the former on the latter have not been examined extensively in any area of medicine. An example of what is possible, however, is the recent set of reports from a group at Rand-UCLA-Value Health Sciences (Kahn et al., 1990a,b,c; Draper et al., 1990; Keeler et al., 1990; Rubenstein et al., 1990; Kosecoff et al., 1990; Rogers et al., 1990) examining the effects of the DRG-based Prospective Payment System on outcomes of care. No comparable assessment has been conducted for the ESRD program.

What can be said about the relationship of reimbursement to quality in the ESRD context? Studies in this area are limited; neither the federal government nor providers have systematically monitored the effect of changes in Medicare reimbursement on indices of quality. However, useful albeit scattered studies of this relationship have been published. Moreover, the committee commissioned new studies in certain areas. This chapter reviews the available information on the effects of reimbursement on dialysis patient mortality, hospitalization, dialysis unit staffing, and innovation. The focus is entirely on dialysis; there are no studies of the effect of reimbursement on quality in transplantation.

EFFECTS OF REIMBURSEMENT ON MORTALITY

As noted in Chapter 4, gross (or unadjusted) ESRD patient mortality has been increasing over time. With adjustment for age and diagnosis, however, mortality rates have been stable. Although an upward shift in mortality is observed between 1982 and 1983, this is attributed to patient characteristics, data reporting, and unexplained factors.1 In any case, these data do not bear directly on whether changes in Medicare ESRD reimbursement affect mortality.

The committee is aware of only two studies that attempt to assess this question directly. Both focused on the effects of implementation of the 1983 ESRD composite rate, which significantly reduced reimbursement to many facilities. The first, by Held and his colleagues at the Urban Institute (1987),2 noted a general increase in reported mortality, suggested that the increasing age of the patient population might account for much of this change, and observed that changing patient selection criteria might account for some increase in the proportion of sicker patients. Specifically, with respect to the effect of the composite rate on mortality, the study reported that "evidence of a correlation between changes in mortality and the extent of composite rate changes was not found, i.e., mortality did not rise more where the composite rate changes were larger" (Held et al., 1987). The authors noted, however, that further analysis was needed to draw firm conclusions.

This 1987 Urban Institute study was done under pressure of time. The present IOM study subcontracted with the Urban Institute to perform further analysis and to update its earlier study. More follow-up data were evaluated, and an improved research design was used. This new report (Held et al., 1990b) uses patient data from two nonconcurrent, prevalent cohorts of patients who were in the system on January 1, 1982, and January 1, 1984. It uses price data for the 1982 and the 1984–87 reimbursement rates, adjusted for area wage differences to capture the real resource costs across areas, an adjustment not undertaken in the earlier study. Two types of analyses ("models") are presented.
The price-level model asks whether mortality is associated with variations in price levels among facilities at a given time, i.e., whether mortality is higher at facilities receiving lower standardized prices (payments) during a specific year. Data from 1982 and from 1984 were analyzed in the new report. The first-difference model uses each facility as its own control by comparing the mortality rates in each facility at two different times. It examines whether the mortality at a facility changed when the price it received changed.

Analyses from the price-level model suggest that there is an inverse correlation between patient mortality and the standardized price that dialysis facilities received. These data suggest that higher standardized prices tend to be associated with lower mortality rates. Results from the first-difference model, however, are less definitive. None of the correlation coefficients for the five primary disease groups (diabetes, hypertension, glomerulonephritis, other diseases, unknown) in the first-difference model is statistically significant. All five coefficients, however, consistently point in the same direction, i.e., that a larger decrease in standardized payment to a given
facility between 1982 and 1984 correlates with a higher mortality rate in that facility. Since the level of statistical confidence in the results is low, evidence from this analysis must be regarded as inconclusive. Held and colleagues (1990b) caution in their report that the findings "do not definitively substantiate the conceptual model proposed," that is, that increased mortality was the result of reduced payments.

The committee regards the results of these two studies as suggestive but insufficient to establish firmly a direct effect of reimbursement on mortality. Since no other studies of this question are available, the committee concludes that the empirical evidence is not sufficient at the present time to support or to refute definitively the hypothesis that past changes in reimbursement have adversely affected mortality directly.

There is some evidence supporting an indirect effect. Held and colleagues (1990a) examined the effect of the 1983 composite rate price change on treatment time, and of treatment time on mortality during a 2-year follow-up period of the 1982 and the 1984 incident patient cohorts. They found that average treatment time decreased by 6 percent (from 5.0 to 4.7 hours) in independent units and by 7.8 percent (from 5.12 to 4.72 hours) in hospital-based units between 1982 and 1987. The percentage of hospital units reporting less than 4.5 hours of treatment time increased from 25 percent to 37 percent during this period. Analysis indicated that a decrease in treatment time from 4.5 hours to 4.0 hours resulted in an increase in the relative risk of death of 1.06 (6 percent higher per year) over the follow-up period (statistically significant, P < .01). Findings from the hospital-based units alone, however, were not statistically significant (beta = 0.99, P = .48).
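The logic of the two model types can be illustrated with a small simulation. Everything below is an illustrative assumption (synthetic data, an assumed true coefficient of -0.05 mortality points per dollar, and invented variable names), not the Urban Institute's actual specification or results.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300  # hypothetical number of facilities

# Illustrative synthetic data: each facility has an unobserved,
# time-invariant quality effect, a 1982 standardized price, and a
# price cut of varying size by 1984.
quality = rng.normal(0.0, 2.0, n)
price_82 = rng.normal(130.0, 10.0, n)
price_84 = price_82 - rng.uniform(0.0, 25.0, n)

def mortality(price):
    # Assumed (for illustration only): mortality falls 0.05 points
    # per $1 of standardized price, plus facility quality and noise.
    return 20.0 - 0.05 * price + quality + rng.normal(0.0, 1.0, n)

mort_82, mort_84 = mortality(price_82), mortality(price_84)

# Price-level model: cross-sectional slope of mortality on price
# among facilities in a single year.
level_slope = np.polyfit(price_84, mort_84, 1)[0]

# First-difference model: each facility serves as its own control,
# so the time-invariant quality effect drops out of the regression.
diff_slope = np.polyfit(price_84 - price_82, mort_84 - mort_82, 1)[0]

print(f"price-level slope:      {level_slope:+.3f}")
print(f"first-difference slope: {diff_slope:+.3f}")
```

Both slopes come out negative here (lower price, higher mortality) because the simulation builds that relation in. In real data the first-difference estimate is typically noisier, since differencing removes stable facility effects but amplifies measurement error, which is one way to read the study's weaker first-difference results.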
Lowrie and Lew (1990) analyzed data on a sample of more than 12,000 patients treated in National Medical Care units who were alive at the end of 1988. Shorter treatment times were associated with higher mortality. After adjustment for differences in certain laboratory values, the relative risk of mortality was reduced but not eliminated.

The committee believes that these data suggest the possibility that decreases in reimbursement may have led to increases in mortality indirectly, via an economic incentive to shorten treatment times. (Shortened treatment periods reduce costs by permitting nurses and technicians to treat more patients in a given shift.) The committee believes that no firm conclusion can be drawn, however, because factors other than changes in payments have probably influenced the trend to shorter treatment times.

Assessing the Effects of Reimbursement on Mortality

Although the use of mortality as an outcome measure represents an important step toward the objective quantification of outcomes, there are reasons to think that the emphasis in available studies on the effect of the 1983 composite rate change on mortality may be inappropriate.

First, the interpretation of the indirect relationship mentioned above is complicated by the fact that clinical opinion, reinforced by patient preferences, may also have contributed to shorter treatment time.3

Second, as noted above, gross mortality has been increasing, but age-adjusted and diagnosis-adjusted mortality have been quite stable. This implies that the increasing age and complexity of the patient population may account for much of the observed increase in mortality. However, adjustment for ESRD patient complexity is somewhat crude at present, being limited to standardization for the effects of patient age, gender, race, and primary diagnosis. More refined means of adjusting for patient complexity may be needed to detect significant mortality changes among subgroups of the patient population. (See Chapter 12.)

Third, the decrement in reimbursement due to the composite rate was small compared to the effects of inflation; any mortality effects of the composite rate, therefore, might be expected to be small. From 1974 to 1982, the nominal payment rate of $138 per treatment remained unchanged; when adjusted for inflation using the gross national product (GNP) price deflator, it fell nearly 50 percent in constant dollars, to the equivalent of $74.50 in 1974 dollars by 1982. In addition, other changes in reimbursement occurred at about the same time as the composite rate for dialysis was implemented: Medicare-as-secondary-payer during the first year of an ESRD patient's eligibility was authorized in OBRA 1981 (see Chapter 7), and the Prospective Payment System was instituted in 1983.
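The "nearly 50 percent" figure can be checked directly from the two dollar amounts given in the text. The deflator ratio below is implied by those figures rather than taken from published GNP deflator tables.

```python
# The nominal rate was frozen at $138 per treatment from 1974 to 1982.
nominal_rate = 138.00
# The text states its 1982 purchasing power was $74.50 in 1974 dollars.
real_rate_1974_dollars = 74.50

# Implied rise in the GNP price deflator over 1974-1982.
implied_deflator_ratio = nominal_rate / real_rate_1974_dollars

# Real decline in the payment rate over the period.
real_decline = 1.0 - real_rate_1974_dollars / nominal_rate

print(f"implied 1982/1974 deflator ratio: {implied_deflator_ratio:.2f}")
print(f"real decline in the payment rate: {real_decline:.1%}")
```

This works out to a deflator ratio of about 1.85 and a real decline of about 46 percent, consistent with the text's "nearly 50 percent."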
Moreover, providers had a long time to anticipate and prepare for the changes introduced by the 1983 composite rate.4 The net economic effect of all these changes on hospitals and on independent facilities is difficult to assess and complicates attempts to determine the isolated effect of the composite rate on mortality.

Finally, there are reasons to think that providers have adapted to reimbursement reductions in ways that dampened the effects on patient mortality. Dialysis units can respond to economic constraints in many ways that protect their core activity of providing dialysis treatment to ESRD patients. For example, they can accept lower "profits" (the difference between revenues and expenses); they can replace equipment more slowly than previously; and they can shop more actively for deals and discounts among suppliers. These types of responses presumably deliver dialysis of unchanged quality at lower cost. On the other hand, units can replace higher-paid with lower-paid personnel, or they may cut services (presumably nonessential amenities first). These latter types of adaptations may be adequate to prevent or minimize increases in mortality but inadequate to avoid decreases in quality by other measures. For example, cutbacks in nutritionist and social worker staffing may increase morbidity and reduce patient quality of life, even though mortality does not increase.
In the mid-1980s, representatives of the provider community responded to the introduction of the 1983 composite rate and the observed increase in gross mortality by arguing that reductions in reimbursement were causing this increase. The committee's review of available studies of this issue, described in the preceding section, indicates that the data in support of this contention are suggestive but by no means conclusive. On the other hand, when the Secretary of Health and Human Services, Dr. Otis Bowen, transmitted the 1987 Urban Institute report to Congress in December 1988, he interpreted its cautious conclusions as showing no "negative effect on the quality of care provided to the Medicare beneficiaries receiving dialysis" (Bowen, 1988). The committee finds no basis for this definitive conclusion either.

EFFECTS OF REIMBURSEMENT ON HOSPITALIZATION

Another important outcome measure is ESRD patient morbidity. The only studies available have used hospitalization rates and lengths of stay as an index of morbidity. Data from Dialysis Clinic, Inc.-Cincinnati (DCI) on its entire historical patient population show that from 1976 to 1989 both the average and the median age of the patient population increased, and that patient severity, measured by the total number of ICD-9 diagnosis codes per patient at the time hemodialysis began, increased significantly (Pollack and Pesce, 1990). The number of hospital admissions for the prevalent 3-year patient cohorts increased substantially over the study years, whereas the average duration of hospital stay declined more than 30 percent.

Held (Urban Institute, personal communication, 1990) analyzed data on ESRD admissions and length of stay for incident dialysis patient cohorts from 1979 through 1985.
Using 1982 as a reference year, he found that, when controlled for age, gender, race, diagnosis, and days at risk, relative hospital admissions fell steadily from 1.25 in 1979 (i.e., 1979 admissions/1982 admissions) to 0.86 in 1985. Relative days of hospitalization fell from 1.73 in 1979 to 0.70 in 1985. In general, more ESRD patients are hospitalized than ever before, because of increased age and complexity, although the relative risk of hospitalization for all but the elderly ESRD patients may be declining.

What effect, if any, does reimbursement for outpatient dialysis have on the hospitalization of ESRD patients? Two hypotheses are to be considered: first, that the level of payment influences the level of resources available for dialysis (a lower price is hypothesized to reduce the quality of treatment, for example, by shortening treatment time or reducing well-trained personnel); second, that less adequate treatment leads to increased morbidity, as evidenced by a higher rate of hospitalization and longer lengths of stay.

The 1987 Urban Institute report (Held et al., 1987) found that inpatient stays of a 1984 ESRD patient cohort were higher than those of a 1982
cohort. However, the study did not find an association between the reimbursement change introduced by the composite rate and greater patient morbidity as measured by in-hospital stays, at least in the first year after the rate was introduced.

The update of the Urban Institute study done for this committee (Held et al., 1990c) differs from the 1987 study in several respects. It includes patients in independent units only, in contrast to patients in all units in the earlier study. It is based on two nonconcurrent prevalent patient cohorts, as of January 1 of 1982 and of 1984, whereas the 1987 study relied on incident cohorts for these years. Each cohort was followed for 2 years. It uses a first-difference model and a price-level model to measure how changes in price affect morbidity.

The study provides some evidence that the level of the dialysis price may affect hospital use by ESRD patients. The study design controlled for differences in the primary renal disease leading to ESRD, for known patient characteristics, and for the demographics of the region in which the dialysis unit is located. By the price-level model, the 1990 study found a significant inverse correlation between the dialysis price level and hospital admission rates as well as hospital length of stay (days). The analysis estimated that a decrease of $10 in the standardized dialysis price is associated with a 2 to 4 percent increase in hospitalization. These results were most conclusive for diabetic patients, an intuitively reasonable finding, since these patients are clinically less stable and require extra care in management on dialysis. The persuasiveness of the conclusion is limited, however, because there was no correlation between price changes and hospital use when the data were analyzed by a different method in which each facility was used as its own control (the first-difference model).
Evidence from Held and colleagues (1990c) suggests that average dialysis treatment time affects hospitalization rates and days. They estimate that an increase in treatment time of one hour is associated with a decrease of 4 to 11 percent in hospital days, with diabetic patients experiencing the largest effect.

Data on the effect of reimbursement on hospitalization of ESRD patients are limited. The earlier study by Held and colleagues (1987) showed no relationship between the introduction of the composite rate and hospitalization. The more recent study by the Held group (1990c) raises the possibility of an inverse relationship between the composite rate and hospitalization: one analytical method showed such an inverse association, but a second analytical method did not. Therefore, no firm conclusion can be drawn that there is a direct effect of reimbursement on morbidity (hospitalization). As with mortality, there is the suggestion of an indirect effect mediated by changes in the length of dialysis treatments, i.e., that decreased reimbursement is associated with decreased length of treatment, which in turn is associated with increased hospitalization.
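Taken at face value, the two point estimates above imply effect sizes that can be illustrated with back-of-envelope arithmetic. The baseline figures below are hypothetical, the per-unit effects are the ranges reported in the text, and linear scaling of the $10 effect to a larger cut is itself an assumption; the estimates also carried wide statistical uncertainty.

```python
# Back-of-envelope application of the reported point estimates.
# All baseline numbers are hypothetical illustrations.

baseline_admissions = 2.0   # hypothetical admissions per patient-year

# A $10 cut in the standardized dialysis price was associated with a
# 2-4 percent increase in hospitalization; assume the effect scales
# linearly with the size of the cut.
price_cut = 20.0            # hypothetical cut, dollars per treatment
lo, hi = 0.02, 0.04         # effect per $10 of price
extra_lo = baseline_admissions * lo * (price_cut / 10.0)
extra_hi = baseline_admissions * hi * (price_cut / 10.0)
print(f"${price_cut:.0f} price cut: +{extra_lo:.2f} to +{extra_hi:.2f} "
      f"admissions per patient-year")

# One extra hour of treatment time: 4-11 percent fewer hospital days.
baseline_days = 15.0        # hypothetical hospital days per patient-year
saved_lo, saved_hi = baseline_days * 0.04, baseline_days * 0.11
print(f"+1 hour treatment time: -{saved_lo:.1f} to -{saved_hi:.1f} "
      f"hospital days per patient-year")
```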
EFFECTS OF REIMBURSEMENT ON UNIT STAFFING

One important structural measure of quality is the level and composition of dialysis unit staffing. This section discusses the changes in staffing that occurred during the 1980s and the causes and consequences of those changes.5

Both qualitative and quantitative changes in dialysis unit staffing occurred during the 1980s. Evaluation of the effects of reimbursement on staffing, and of staffing on patient outcomes, has been hampered by the lack of national statistics. In 1989, however, Held and colleagues (1990a) compiled a data set that included national data on facility staffing as well as on dialysis time and reimbursement. Data were extracted from 1982 and 1987 statistical cost forms for a sample of dialysis facilities. Using these data, they calculated medical staff hours per patient per week for independent as well as hospital-based outpatient dialysis units, including total, registered nurse (RN), licensed practical nurse (LPN), technician, nursing assistant, social worker, and dietitian hours. The results are shown in Table 10-1.

Between 1982 and 1987, both hospital-based and independent units reduced total staff hours per patient-week. Hospital-based units changed more than independent units: in 1982, they provided nearly 11 percent more total staff hours per patient-week than independents, but by 1987 the difference was only 1 percent. A major difference between hospital-based and independent units is that the latter average 2 hours less RN time per patient-week, making greater use of LPNs, nursing assistants, and technicians. Hospital-based units, by contrast, report less social worker and dietitian time per patient-week.

Data submitted to the committee at its May 1989 public hearing in Chicago are consistent with these national data. Dr.
Raymond Hakim (1989), representing DCI, the largest chain of not-for-profit dialysis providers, testified regarding staffing changes:

    Even though patients are older and sicker, reductions in reimbursement and increases in labor costs have forced dialysis facilities to increase their patient to staff ratio. In 1981, the average patient-to-staff ratio at DCI was 3.9 to 1. However, in 1988, the average patient-to-staff ratio at DCI was 4.83 to 1.

Similar testimony was presented by Dr. Edmund Lowrie, President of National Medical Care (NMC). Lowrie (1989) testified that whereas NMC units once maintained patient/staff ratios of 3:1, "ratios greater than 4:1 are now common." Furthermore, the ratio of licensed nursing staff to unlicensed patient care staff has been reduced in many NMC facilities. Other providers confirm this general trend. Dr. Alan Kanter (1989), President of North Central Dialysis Centers (NCDC,
a four-unit proprietary organization in Chicago), presented the data on actual and projected staffing levels for 1986 through 1990 displayed in Table 10-2.

TABLE 10-1 Outpatient Dialysis Units: Staff Hours per Patient-Week, 1982 and 1987

         Hospital-based Units           Independent Units
         1982    1987    % Change      1982    1987    % Change
Total    9.88    8.22     -16.7        8.91    8.12      -8.9
RN       5.76    4.94     -14.3        3.43    2.85     -17.0
LPN      1.25    1.12     -10.5        1.63    1.21     -19.4
NAS      0.41    0.22     -46.4        0.30    0.59      97.7
TECH     1.79    1.59     -11.4        2.76    2.69      -2.6
SW       0.39    0.22     -43.7        0.48    0.44      -8.6
DIET     0.28    0.14     -48.5        0.32    0.25     -22.0

NOTE: RN = registered nurse; LPN = licensed practical nurse; NAS = nursing assistant; TECH = technician; SW = social worker; DIET = dietitian.

SOURCE: Held et al., 1990a.

Several general trends in the staffing of outpatient dialysis units are evident from these sources. First, patient-to-staff ratios increased substantially during the 1980s. Second, units have shifted away from highly trained and well-paid registered nurses toward less well-trained, lower-paid staff, mainly technicians. Third, patient hours with social workers and dietitians have also been reduced. Kanter (1989) described the NCDC experience:

    In the first decade of operation, our social service personnel provided many hours of family counselling for our patients. More recently these sessions have been all but discontinued. Our social workers now spend much of their time in crisis management and in routine activities such as those involved in providing patient transportation. They are increasingly responsible for obtaining and maintaining insurance information. This latter activity, so necessary for NCDC's fiscal health, is a sad waste of their professional skills. We have not increased the number of these personnel. Each social worker and dietitian now has a patient load about 30 percent bigger than in 1986.

The use of LPNs and nursing assistants reveals no clear pattern or trend.
It suggests, however, that facilities are searching for new staffing patterns. Kanter (1989) testified: "The increased number and percentage of licensed practical nurses in 1987 and 1988 produced inadequate economy of operation. We have started a new program to find and train experienced medical assistants and unlicensed immigrant nurses and physicians as dialysis technicians." Searching for an optimal mix that will deliver good care at the
lowest possible cost is desirable, and it can be quite constructive. Under the current resource constraints, however, the search is an empirical exploration, not informed by analysis relating inputs to outcomes, and it runs increasingly counter to professional judgment.

TABLE 10-2 Staffing Changes, 1986–1990: North Central Dialysis Centers, Chicago, Illinois

Category of
Professional    1986           1987         1988          1989         1990
RNs             30.5 (68.5)a   22 (48.9)    17.3 (38.2)   16 (36.4)    17 (37.0)
LPNs            12 (27.0)      22 (48.9)    27 (59.6)     19 (43.2)    11 (23.9)
Technicians     2 (4.5)        1 (2.2)      1 (2.2)       9 (20.5)     18 (39.1)
TOTAL FTEs      44.5           45           45.3          44           46

NOTE: RN = registered nurse; LPN = licensed practical nurse; FTE = full-time equivalent.

a Figures shown are numbers of full-time-equivalent employees; figures in parentheses are percentages of each column's total.

SOURCE: Kanter, 1989.

What Factors Are Causing These Changes?

Unit managers (medical directors, owners, hospital administrators) whose revenues are constrained must reduce costs if they are to continue to provide dialysis services. Since payroll costs constitute a large and growing portion of total unit costs, the level and mix of staffing are a logical, indeed necessary, target of managerial attention.

Held and colleagues (1990a), using a sample of dialysis units, analyzed the relation between the change in reimbursement introduced by the 1983 composite rate and unit staffing levels and composition. They found that independent units with higher standardized reimbursement rates in 1982 had higher levels of RNs and other medical staff (LPNs, nursing assistants, and technicians); social worker and dietitian hours per patient per week showed no clear pattern as a function of price differences. In 1987, there was a consistent relation between price and staff time per patient-week for other medical staff, including social workers and dietitians. The pattern for RNs
was similar to that in 1982, although the inverse relationship between reimbursement and RN staffing was statistically weaker than in 1982.

The data from hospital-based units revealed a different pattern. In 1987, hospital units with the highest standardized price did show the highest number of RN hours per patient-week. However, this relationship did not hold for other nursing personnel, social workers, or dietitians; their hours were stable regardless of standardized price. Moreover, 1982 data revealed no relationship between reimbursement and RN staffing.

The same research group (Garcia et al., 1990), in a related analysis of the effect of the reduction in reimbursement between 1982 and 1984, found that independent units with cuts of $15 to $30 per treatment reduced RN and social worker/dietitian time more than those with cuts of less than $15. The three units with price cuts of more than $30 showed the largest reductions in RN and other medical hours per patient-week. Hospital-based units showed no clear pattern of RN hours as a function of the size of price cuts; they did reveal a clear relationship between the size of price cuts and decreases in other nursing staff, social workers, and dietitians.

Held and co-workers (1990a) summarize the results of their analysis as follows:

    The staffing of dialysis units is affected by the Medicare price of dialysis. Both hospital and freestanding [i.e., independent] units show impacts of the Medicare price level. The hospital units appear to respond to the lower prices by lowering the entire staff to patient ratio. Freestanding units, perhaps because their total staff to patient ratio was already substantially lower than the ratio for hospitals at the beginning of the period under study, respond to lower prices by reducing the RN hours per patient with less change in the total staff hours per patient.

What Are the Consequences of These Staffing Changes?
Does reimbursement affect staffing? Studies indicate with reasonable certainty that the answer is yes. Do the observed changes in the level and composition of staffing represent a deterioration in quality? The answer is more difficult to obtain. Changes in the number and composition of dialysis staffs do not by themselves indicate that patient outcomes, the key measure of quality, have deteriorated. ESRD program data do not allow the relationships among structure, process, and outcomes to be analyzed definitively. The recent research by Held and colleagues (Garcia et al., 1990; Held et al., 1990b,c), however, does relate reimbursement (or price) to dialysis unit staffing (a structural measure), staffing to treatment time (a crude process measure), and treatment time to mortality (an outcome measure). On the relationship of reimbursement, staffing, and treatment time, they find that, for independent units, price affects RN hours per patient-week and dialysis treatment time. For hospital-based units, price affects total staffing hours but not RN hours or average dialysis treatment time. Although shorter treatment time has been correlated with patient mortality, staffing changes have not. Again, the results are suggestive but not conclusive. Measures and data that permit the analysis of relationships among reimbursement, staffing, treatment, and patient outcomes are needed. In the absence of such data, it is useful to discuss the potential implications of staffing changes for the quality of care delivered to patients.

Nurses

The institutional base of dialysis nursing has shifted in the past decade from hospital-based to independent units, as the latter have grown in number and total capacity. In both settings, demands on nurses, especially RNs, reportedly have increased because of a reduction in their absolute numbers, higher patient/nurse ratios, greater patient complexity, increasingly sophisticated technology, and diminished scheduling flexibility. The involvement of more technicians and other non-RN staff in delivering dialysis treatments has increased the supervisory responsibilities of RNs. The decrease in social workers and dietitians has meant that some of their responsibilities have fallen on nurses.

Data on salaries of nephrology nurses are not available. There are suggestions that salary opportunities in other nursing fields make recruitment and retention in nephrology difficult. Reports from the American Nephrology Nurses Association (ANNA) indicate that support for educational programs, attendance at professional meetings, tuition reimbursement, and on-the-job orientation and training has declined. A 1984 survey reported low morale among dialysis unit head nurses (ANNA, 1985), but data on nursing satisfaction are not regularly collected by anyone.

What are the implications of changing patterns of nurse staffing, and especially of the substitution of technicians for nurses?
It should be emphasized that nurses are broadly trained to care for patients and then obtain specialized training in dialysis delivery. Technicians, by contrast, are trained only to deliver dialysis treatments. Specific effects of the staffing changes, then, include the following: direct clinical supervision of patients is reduced; clinical information on specific patients is less readily available to physicians as they make rounds; crisis management capability is reduced; and the probability of errors in comprehensive patient management is increased. The effect of the substitution of technicians for nurses on patient management or patient outcomes has not been analyzed. Nevertheless, the clinical judgment of unit medical directors and the policies of providers indicate concern about the effects of this substitution. NCDC, for example,
[…]

of needed services, they calculated that during 1989 the number of social workers was 37 percent of need at DCI-C, 34 to 39 percent at DCI, and 40 to 46 percent at Network 9. Using a more conservative Medicare requirement, the available social workers were 67 percent of need at DCI-C, 62 to 72 percent in all DCI units, and 72 to 84 percent in Network 9 units. Research is needed to examine definitively the relationship of social services to patient outcomes.

Dietitians

Data on patient/staff ratios for renal dietitians are scarce, but ratios ranging from 140/1 to 200/1 are estimated by knowledgeable individuals. The importance of nutrition to patient outcomes in dialysis is increasingly recognized. The existing dietitian/patient ratios may limit implementation of suitable dietary programs. In a recent survey, members of the Council on Renal Nutrition of the NKF expressed great interest in quality assurance (CRN, 1989). The opportunity exists in nutrition to examine carefully the relationship of specific professional inputs to the treatment process, to determine the effects on patient management and patient outcomes.

Implications of Changing Staff Patterns for Quality

The combined effect of declining real reimbursement rates and increasing salaries and wages provides strong incentives for dialysis units to alter staffing patterns. In the committee's judgment, current patterns have not been determined by professional judgment of the requirements for adequate care but rather represent a response to increasing economic constraints. The practical effect has been that units that staff only for a routine treatment process may have inadequate staff for crisis management and may have reduced social worker and dietitian resources to a minimum.
If dialysis were simply an industrial production process, the search for appropriate staffing patterns would be determined largely by the homogeneity of inputs, opportunities for substituting capital for labor, and the balance between the routine and crisis demands of treatment. Dialysis units, however, confront a patient population ("input") that is growing more heterogeneous over time; treatment technology is unlikely to provide significant labor substitution opportunities in the near future (see below); and, as noted earlier, the limits of reductions in treatment time appear to have been reached. Changing personnel patterns in dialysis units represent empirical adaptations to scarce resources that have been made without a clear understanding of their clinical implications. Moreover, these changes are occurring without any ongoing assessment of their implications for the quality of patient care. As discussed in Chapter 12, this situation should be changed.
EFFECTS OF REIMBURSEMENT ON INNOVATION

The committee also considered the effects of reimbursement on innovation. In general, economic discipline has encouraged cost-reducing, quality-enhancing technical change in the competitive dialysis equipment and supplies market. Such changes incorporate increased scientific and clinical knowledge about treatment as well as increased manufacturing knowledge. As a result, the equipment and supply (nonlabor) component of the total cost per dialysis treatment has fallen. Romeo (1984) estimated that the average price of dialyzers purchased by hospitals from 11 manufacturers fell from $20.10 in 1978 to $13.20 in 1983, adjusted for inflation. Testimony before the February 1990 IOM public hearing indicated that the equipment and supply component fell from roughly one-third of total cost 15 years ago to one-fifth or less at present. Further cost-saving technical change, therefore, will afford smaller economic benefits to providers and payers.

Major clinical innovations, such as cyclosporine for preventing rejection of the transplanted kidney and erythropoietin for treating anemia in dialysis patients, have resulted from progress in basic scientific research. Such developments are not likely to be influenced by ESRD reimbursement policy. Major innovations with unequivocal clinical benefits, such as the two cited, will diffuse into practice; when necessary, HCFA has modified its reimbursement policy to encourage such major developments. On the other hand, targeted research and development efforts to improve the technical aspects of dialysis may be discouraged by the knowledge that the results are unlikely to be used if they increase costs. In the past, reimbursement has not been increased to pay for small technical improvements.

The effect of Medicare reimbursement on innovation in the treatment of ESRD patients has not been examined systematically.
It requires that distinctions be made between dialysis and transplantation, hemodialysis and peritoneal dialysis, incremental and major innovations, and periods of reimbursement policy. The IOM commissioned a paper (Levin et al., 1990) to assist it in this task, which, augmented by other sources, leads to the following appraisal.

Hemodialysis

Hemodialysis treatment systems have three major components: a surgically created vascular access, which allows the patient's blood to circulate repeatedly through the system; a semipermeable membrane blood path (known as the dialyzer) and connecting tubing; and a system that produces and monitors a dialyzing fluid (dialysate) to carry off toxins that diffuse from the blood through the membrane, that pumps blood through the dialyzer and monitors its pressure, and that operates alarms to protect against hazards in
the operation of the dialysis system. Innovations have occurred in each of these elements over time.

Dialyzers have progressed from assembled structures of large volume and mediocre efficiency to presterilized compact devices of much greater effectiveness. Geometries have included the coil, the flat plate, and the hollow-fiber dialyzer, the last of which has dominated the market for more than a decade. The development of high-efficiency polymer membranes, now termed "high flux" because of their rapid passage of water at low pressures, had to await equipment capable of continuous monitoring and control of fluid removal, since staff monitoring could not provide safe control. In addition, high rates of water and solute removal required a shift from acetate- to bicarbonate-based dialysate because the former, when used with high-flux membranes, caused discomfort or injury to the patient. In turn, systems were required to proportion the bicarbonate separately from other dialysate components, since bicarbonate dialysate could not be stored for long periods. Dialysis machines are now usually computerized to control and monitor fluid removal and dialysate proportioning.

Initially, blood flow through the dialyzer was driven by the patient's own heart, but all current means of repeated dialysis use external blood pumps to drive blood through the system. The higher efficiency of water and solute transfer in the dialyzer has permitted blood flow rates of 300 to 500 milliliters per minute (ml/min), up from the initial rate of 200 ml/min, with generally acceptable safety and comfort for the patient. Rapid dialysis treatments offer shorter time on dialysis and thus more patient shifts per day. They also require staff able to respond to problems that develop quickly and demand immediate attention.
The prescription of dialysis therapy for each patient now spans a wider range of options than existed previously. Each prescription includes dietary instruction and monitoring, medications, and regular observation. Assessment of the actual and optimal amount of dialysis administered in a session, a week, or a month remains a source of controversy. The most accepted measure, based on urea removal as calculated by Kt/V (see note 3), was developed through a clinical trial sponsored by the Artificial Kidney/Chronic Uremia program of NIH. This program was terminated before the study was completed; consequently, further refinement of the method and validation under controlled circumstances did not occur.

The Interim Regulations of 1973 (38 Fed. Reg. 17210, June 29, 1973), which remained in effect for 10 years, appear to have affected innovation in several ways. First, they imposed economic limits on all dialysis units and gave providers strong incentives to search for operational efficiencies. Provider incentives, in turn, encouraged manufacturers and suppliers to engage in substantial price competition in the competitive product market. In
the main, the net effect appears to have been a period of useful cost-reducing, quality-enhancing technical change.

The Interim Regulations paid hospital-based outpatient units on a reasonable-cost basis not to exceed $138 per treatment unless an exception was granted. In fact, most hospital-based units received such exceptions and were being paid, on average, $159 per treatment by the early 1980s. This rate differential provided financial support for a limited amount of ESRD research in some academic institutions.

By the end of this period, the economic discipline imposed by a fixed payment rate had stimulated many dialysis units to begin to reuse dialyzers. Dialyzer reuse reported to the Centers for Disease Control increased between 1983 and 1987 from less than 20 percent of facilities to more than 60 percent. Advances in the technology of dialyzer reprocessing allowed multiple uses of one dialyzer on a single patient, dividing the dialyzer cost across treatments. Without reuse, the cost of the new polymer membranes would have precluded their widespread use. The hollow-fiber dialyzer configuration, which is compact and rigid, permits monitoring of reuse by uniform techniques. Reuse was seen by some patients, however, as providers using second-hand goods to enhance profit, not patient care, and by others as potentially harmful. In most instances, open discussion between physicians and patients was effective in restoring confidence. Guidelines for the practice of reuse were developed by national consensus under the auspices of the Association for the Advancement of Medical Instrumentation (AAMI) and stimulated quality control over reuse through standardized practices. AAMI guidelines were incorporated into regulations in 1988, and the reuse controversy receded.

The introduction of the composite rate in 1983 had several general effects on providers and, derivatively, on innovation.
It increased the severity of the economic discipline on all providers, further encouraging the search for efficiency. It also substantially reduced the differential between hospital-based and independent dialysis units, thus eliminating any research support from this source.

Peritoneal Dialysis

The composite rate affected peritoneal dialysis differently than it did hemodialysis. Peritoneal dialysis had been used as therapy for acute renal failure since the 1960s, but despite its definition as a treatment for chronic renal failure by Tenckhoff and Schechter (1968), it was little used in the 1970s. Oreopoulos and colleagues (1978) and Popovich and co-workers (1978) conceived of CAPD, which, despite its low efficiency, would be effective because of its continuous process and would require no hardware. The
FDA approved the use of CAPD in 1978. Baxter Laboratories introduced the first CAPD kit in 1979, the year following legislation that sought to encourage home dialysis. CAPD appealed to HCFA as conceptually simple, possibly less costly than hemodialysis, and a way to encourage patients to dialyze at home. From 1979 until 1983, peritoneal dialysis was reimbursed on a cost basis. The composite rate, however, paid providers the same for an in-center and a home treatment, for both hemodialysis and peritoneal dialysis. The margins on CAPD fluids thus provided an economic incentive for its use, and as its clinical effectiveness improved, peritoneal dialysis became the treatment modality for more than 15 percent of patients.

Dialysis Research Support

The dialysis supply industry in the 1980s experienced substantial consolidation, consistent with the pattern that Romeo (1984) had anticipated earlier in the decade. The reduced sales volume resulting from reuse and the sustained pressure on prices left few resources for U.S. firms to pursue development of existing products or the discovery of new ones. In 1990, there were fewer firms worldwide than in 1980, and only a small number of these were American. Several U.S. firms have been acquired by European companies. Industrial investment in dialysis-related research and development has largely left the United States for Europe and Japan, where research costs are lower and where the industry has larger earnings.

Numerous innovations in membranes, dialysate, materials, and applications were supported from the mid-1960s to the early 1980s by the NIH Artificial Kidney/Chronic Uremia program mentioned above. That program was phased out in the 1980s, in part because NIH concluded that industrial research was supporting the development needs of clinical dialysis.
From 1984 through 1989, the National Institute of Diabetes and Digestive and Kidney Diseases supported no clinical research in dialysis treatment. This has had several effects. First, the flow of research results and ideas into the dialysis clinical community effectively ceased. The quality control of medical practice that is exercised by clinical research, and the example it sets, were lost from that arena. For instance, just as the NCDS provided a rationale and method for monitoring prescription dialysis, the source of research support to test and validate the study's results dried up. Second, the flow of research results to replenish the stock of ideas from which private firms draw inspiration for commercial products also ceased. The flow of results from publicly funded research to private R&D and commercialization is complex, and the appropriate boundaries between the two sectors are never entirely clear. Yet the social rate of return on basic and applied research is estimated to exceed the return that private
firms receive (Mansfield et al., 1977). This argues for public support of research. If no public research investment is forthcoming, industry is unlikely to make up the difference.

SUMMARY

Some data suggest that decreased reimbursement may increase mortality. This effect may be direct or indirect (decreased reimbursement led to shorter dialysis time, which in turn led to increased mortality). However, available studies are insufficient to establish conclusively either a direct or an indirect effect of reimbursement on mortality.

As with mortality, some studies suggest, but do not prove, a direct or indirect effect of decreased reimbursement on morbidity, as assessed from hospitalization data.

Data strongly suggest that decreased reimbursement has led to decreased staffing in dialysis units, to shifts from nurses to technicians, and to important reductions in social worker and dietitian staffing. There is no direct evidence that these changes in staffing patterns have affected quality; professional opinion, however, holds that they have.

Dialysis treatment time has decreased in the past decade. Studies indicate that decreased time may have adversely affected mortality and morbidity. The shorter times are attributable in part to economic pressure from decreased reimbursement. However, a clear relationship between quality and reimbursement cannot be established because clinical judgment and patient preference also have influenced the shortening of dialysis times.

Reimbursement policy appears to have had mixed effects on innovation. Initially, payment policy encouraged cost-reducing technical change in equipment and supplies. More recently, reductions in reimbursement and the elimination of support for dialysis research appear to be restricting further technical improvements. Reimbursement policy does not appear to have restricted the development of major new clinical innovations such as cyclosporine and erythropoietin.
In these instances, policy has been modified to permit their use.

CONCLUSIONS

Taken together, available data suggest, but do not prove conclusively, that previous decreases in reimbursement may have adversely affected quality. A priori, conclusive evidence is unlikely: the question of reimbursement effects on quality has received little careful study, and existing data systems were not designed to measure such effects. Moreover, because many other changes were occurring concurrently with changes in reimbursement (e.g., in the ESRD patient population and in professional opinion), only large effects of reimbursement would be detectable.
The lack of progressive improvement in age- and diagnosis-adjusted dialysis patient mortality over the past decade suggests that providers may have reached the limits of increasing efficacy in the application of current technology. It is possible that age- and diagnosis-adjusted mortality would have improved over the past decade, as has happened with other medical conditions, had reimbursement not been eroded by reductions and by inflation. Cuts in reimbursement may have limited the development and introduction of new techniques that require increased physician and staff time or capital investment. Moreover, even if prior reductions in reimbursement had no effect, it does not follow logically that further decreases will not increase mortality. We may be at the edge of a "slippery slope," beyond which further cuts will have large effects because the ability of providers to absorb decreased resources without dangerously eroding quality has been exhausted.

Because dialysis is a life-sustaining treatment, the committee concluded that it must give some weight even to imperfect data that point in the direction of adverse effects. Taken together, the suggestive evidence deserves attention because all results point in the same direction (decreased reimbursement may have eroded quality), changes in quality are related temporally to changes in reimbursement, and the underlying behavioral and physiological hypotheses are plausible. Although none of these studies constitutes conclusive proof of adverse effects of prior reductions in reimbursement on patient outcomes, none rules out such an effect and none suggests that reimbursement reductions are contributing to improved quality of care.

These data have important implications for reimbursement policy. Reimbursement issues are discussed and recommendations offered in Chapter 11.

NOTES

1.
Inconsistencies in the reporting of HCFA data occurred simultaneously with this mortality shift; consequently, the change is believed to be, in part, an artifact of the data system. The data in question involve mainly elderly patients and those for whom the primary diagnosis was missing. Although some of these problems can be controlled by statistical techniques, a residual difficulty remains in comparing patients' mortality risk in the years before and after this shift.

2. OBRA 1986, enacted in October of that year, in addition to limiting the HCFA-proposed $6 reduction in the composite rate to $2, directed the Secretary of DHHS to request that the National Academy of Sciences propose a study "to evaluate the effects of reductions in the rates of payment for facility and physicians' services under the Medicare program for patients with end stage renal disease on their access to care or on the quality of care." This study was to be submitted to Congress by January 1, 1988. The combination of the statutory deadline, HCFA submission requirements (September 30, 1987), and the complexity of the subject led the Institute of Medicine to decline to do the study. HCFA then asked the Urban Institute to conduct the study. They did so
and submitted a final report to HCFA in December 1987. The Department of Health and Human Services publicly transmitted the report to Congress one year later, on December 30, 1988.

3. A vigorous discussion about the adequacy (or the appropriate "dose") of dialysis treatment is occurring at present (Hull and Parker, 1990). The topic is not new, having been the subject of a cooperative clinical trial conducted in the late 1970s, the National Cooperative Dialysis Study (NCDS). The NCDS reflected the desire of many clinicians to develop a quantitative approach to the prescription of an individualized dialysis "dose" and to develop clinical predictors of successful treatment. The NCDS published preliminary results in 1981 (Lowrie et al., 1981) and its major findings in 1983 (Lowrie and Laird, 1983). The NCDS presented a complicated rationale and procedure for prescription dialysis that involved calculating and monitoring patients' nutritional status, the time-averaged concentration of urea (TACurea), and treatment time. The trial gave strong support to prescription dialysis and to a corollary message that sanctioned shorter treatment time. On the latter, the trial concluded that an adequate "dose" of dialysis could be achieved by calculating a patient-specific prescription and that the modeling required for such a prescription would "result in a smaller dose of therapy for a particular patient" than if it had not been done (Lowrie and Teehan, 1983). Although the other variables (nutrition, TACurea) were important, treatment time was the most easily controlled by clinicians. Gotch and Sargent (1985) reanalyzed the NCDS data and developed their formulation for calculating adequate dialysis treatment: Kt/V, where K = dialyzer clearance, t = total treatment time at the prescribed blood and dialysate flow rates, and V = the estimated volume of urea distribution, a function of patient muscle mass and body weight.
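The arithmetic behind this index can be sketched briefly. The snippet below is an illustrative calculation only, not drawn from the report: it uses the simple single-pool ratio, ignores the urea-generation and rebound corrections of full kinetic modeling, and the example values for clearance, time, and urea volume are assumed for illustration.

```python
def kt_over_v(clearance_ml_min, treatment_min, urea_volume_ml):
    """Dimensionless dialysis 'dose' Kt/V: dialyzer urea clearance K (ml/min)
    times treatment time t (min), divided by the urea distribution volume V (ml).
    Simple single-pool ratio; kinetic modeling adds corrections not shown here."""
    return clearance_ml_min * treatment_min / urea_volume_ml

# Assumed illustrative values: K = 250 ml/min, t = 240 min (4 hours),
# V = 40,000 ml (roughly the urea distribution volume of a 70-kg adult).
dose = kt_over_v(250, 240, 40_000)  # -> 1.5
```

Read the other way, the same ratio shows why treatment time was the lever most easily controlled by clinicians: for a fixed target Kt/V and a given V, raising the effective clearance K (for example, with high-flux membranes and higher blood flow rates) shortens the required time t proportionally.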
They showed that the dialysis procedure could be performed more precisely and more efficiently to fit the needs of the individual patient and that shorter treatment time was adequate for some patients. The findings of the NCDS and of Gotch and Sargent coincided with the introduction of the composite rate in 1983 and the proposed reduction of 1986. Actual and proposed reimbursement policy encouraged the search for treatment efficiency and intersected with the development of clinical opinion that, in effect, sanctioned shorter treatment time. Moreover, patient preferences, once shorter treatment became a clinical option, reinforced this development.

Entering the 1990s, prevailing clinical opinion about prescription dialysis, adequacy, and treatment time has been challenged. The papers by Held and colleagues (1990a) and by Lowrie and Lew (1990) were cited in the text. The latter found that low serum albumin was the variable most strongly associated with probability of death, suggesting that malnutrition, especially among older patients, may contribute to mortality. Lowrie and Lew conclude, among other things, that "short dialysis time should be prescribed with caution, and, if used with individual patients, the treatment should be carefully monitored and controlled" (Lowrie and Lew, 1990, p. 480). In addition, Gotch and colleagues (1990) analyzed 101 transient patients treated in San Francisco and concluded that most prescription dialysis was empirical (i.e., did not use kinetic modeling), that half the prescriptions from these patients' home facilities were inadequate by NCDS criteria, and that length of dialysis was prescribed mainly as a function of BUN (with low BUN resulting in shorter treatment time).
Gotch described the relationship among the depressed appetite of dialysis patients, the poor nutrition of some patients, low BUN (which may follow from poor nutrition), and the consequent tendency to prescribe shorter treatment on the basis of BUN as a marker of adequacy as a "dangerous downward spiral." Sargent (1990) examined the delivery of prescription dialysis for 297 patients in 48 treatment units and found that many dialyses "deviate significantly from the intended
prescription," many falling outside NCDS guidelines because of poor treatment delivery. Two strategies for improving mortality, he concludes, are "(1) to aggressively monitor the delivery of treatment to assure that the intended dialysis is performed; and (2) to over-prescribe treatment to assure that even compromised dialyses still result in adequate therapy" (p. 509). Both strategies, it should be noted, have economic dimensions: the first requires greater clinical supervision; the second, longer treatment times. These developments make it imperative that the relationships among reimbursement, dialysis prescription (including treatment time), patients' nutritional status and preferences, and patient mortality be clarified in the 1990s.

4. The legislation of June 1978 called for a prospective reimbursement rate for outpatient dialysis. Although implementing regulations, proposed in 1980, were shelved by the new Reagan administration in 1981, OBRA 1981 restated the congressional requirement that HCFA establish a prospective rate. In February 1982, HCFA published a Notice of Proposed Rulemaking regarding the proposed reimbursement policy (47 Fed. Reg. 6556, February 12, 1982) and promulgated a final rule in May 1983 (48 Fed. Reg. 21255, May 11, 1983).

5. The committee hosted a workshop of nurses, social workers, and dietitians in November 1989 to discuss the implications of staffing changes. The participants in that workshop are listed in Appendix G. A draft paper was prepared and circulated for comment. Representatives of technicians also reviewed the document and provided useful information.

6. The State of California is an exception. Since 1982, it has required that dialysis units have a state-certified training and examination program that covers fluids and electrolytes, kidney disease and treatment, dietary management, principles of dialysis, dialysis technology, and dialysis patient care.

7.
The Board of Nephrology Nurses and Technicians offers a test to technicians who voluntarily choose to be examined. These test results, however, have no official status with any state regulatory body.

8. State health codes in Connecticut, New York, Texas, and elsewhere provide a focus for conflicts between nurses and technicians.

9. The survey was mailed to 1,576 facilities; responses were received from 345 social workers. Respondents' facilities included 6 for-profit hospital-based units, 113 not-for-profit hospital units, 163 independent for-profit units, and 41 independent not-for-profit units.

10. Individuals with experience in ESRD-related social work before 1976 but without MSW degrees were "grandfathered" into eligibility.

REFERENCES

ANNA (American Nephrology Nurses Association). 1985. 1985 head nurse survey results. ANNA Update. May–June.

Bowen OR. 1988. Report to Congress: Impact of the Changes in the ESRD Composite Rate. Washington, D.C.: Department of Health and Human Services. December.

CRN (Council on Renal Nutrition). 1990. CRN Membership Survey Results. New York: National Kidney Foundation.

Draper D, Kahn KL, Reinisch EJ, et al. 1990. Studying the effects of the DRG-based Prospective Payment System on quality of care. JAMA 264:1956–1961.

Garcia J, Held PJ, Cahn MA, Pauly MV. 1990. Staffing of dialysis units and the price of dialysis. Paper prepared for the Institute of Medicine ESRD Study Committee. Washington, D.C.: Urban Institute. January 18.

Gotch FA, Sargent JA. 1985. A mechanistic analysis of the National Cooperative Dialysis Study (NCDS). Kidney Int 28:526–534.
Gotch FA, Yarian S, Keen M. 1990. A kinetic survey of US hemodialysis prescriptions. Am J Kidney Dis 15:511–515.

Hakim R. 1989. Testimony presented to a public hearing of the IOM ESRD Study Committee. Chicago. May 5.

Held PJ, Bovbjerg RR, Pauly MV, Garcia JR, Newmann JM. 1987. Effects of the 1983 "Composite Rate" Changes on ESRD Patients, Providers, and Spending. Washington, D.C.: Urban Institute. December 21.

Held PJ, Garcia JR, Pauly MV, Cahn MA. 1990a. Price of dialysis, unit staffing, and length of dialysis treatments. Am J Kidney Dis 15:441–450.

Held PJ, Garcia JR, Pauly MV, Wolfe RA, Gaylin DS, Cahn MA. 1990b. Mortality and the price of dialysis. Paper prepared for the Institute of Medicine ESRD Study Committee. Washington, D.C.: Urban Institute. June 19.

Held PJ, Garcia JR, Wolfe RA, Gaylin DS, Pauly MV, Cahn MA. 1990c. Price of dialysis and hospitalization. Paper prepared for the Institute of Medicine ESRD Study Committee. Washington, D.C.: Urban Institute. June 19.

Hull AR, Parker TF. 1990. Introduction and summary: Proceedings from the Morbidity, Mortality, and Prescription of Dialysis Symposium (Dallas, Tex., Sept. 15–17, 1989). Am J Kidney Dis 15:375–383.

Kahn KL, Rubenstein LV, Draper D. 1990a. The effects of the DRG-based Prospective Payment System on quality of care for hospitalized Medicare patients: An introduction to the series. JAMA 264:1953–1955.

Kahn KL, Rogers WH, Rubenstein LV, et al. 1990b. Measuring quality of care with explicit process criteria before and after implementation of the DRG-based Prospective Payment System. JAMA 264:1969–1973.

Kahn KL, Keeler EB, Sherwood MJ, et al. 1990c. Comparing outcomes of care before and after implementation of the DRG-based Prospective Payment System. JAMA 264:1984–1988.

Kanter A. 1989. Testimony presented to a public hearing of the IOM ESRD Study Committee. Chicago. May 5.

Keeler EB, Kahn KL, Draper D, et al. 1990.
Changes in sickness at admission following the introduction of the Prospective Payment System. JAMA 264:1962–1968.

Klahr S, President, National Kidney Foundation. Letter of November 29, 1989, to Louis B. Hayes, Acting Administrator, Health Care Financing Administration.

Kosecoff J, Kahn KL, Rogers WH, et al. 1990. Prospective Payment System and impairment at discharge. JAMA 264:1980–1983.

Levin NW, Keshaviah P, Gotch FA. 1990. Effect of reimbursement on innovation in the ESRD program. Paper prepared for the Institute of Medicine ESRD Study Committee. New York: Beth Israel Medical Center.

Lowrie EG. 1989. Testimony presented to the IOM ESRD Study Committee. Chicago. May 5.

Lowrie EG, Laird NM, Parker TF, Sargent JA. 1981. Effect of the hemodialysis prescription on patient morbidity. N Engl J Med 305:1176–1181.

Lowrie EG, Laird NM (eds.). 1983. Cooperative Dialysis Study. Kidney Int 23(Suppl 13).

Lowrie EG, Lew NL. 1990. Death risk in hemodialysis patients: The predictive value of commonly measured variables and an evaluation of death rate differences between facilities. Am J Kidney Dis 15:458–482.

Lowrie EG, Teehan BP. 1983. Principles of prescribing dialysis therapy: Implementing recommendations from the National Cooperative Dialysis Study. Kidney Int 23(Suppl 13):S113–S122.

Mansfield E, Rapoport J, Romeo A, Wagner S, Beardsley G. 1977. Social and private rates of return from industrial innovations. Quarterly J Econ 91(May):221–240.

Morford TG, Director, Health Standards and Quality Bureau, Health Care Financing Administration. Letter of March 15, 1990, to Saulo Klahr, M.D., President, National Kidney Foundation.

Oreopoulos DG, Robson M, Izatt S, Clayton S, de Veber GA. 1978. A simple and safe technique for continuous ambulatory peritoneal dialysis (CAPD). Trans Am Soc Artif Intern Organs 24:478.

Pollack VE, Pesce A. 1990. Analysis of data related to the 1976–1989 patient population: Treatment characteristics and patient outcomes. Report to the Institute of Medicine. Cincinnati: Dialysis Clinic, Inc.-Cincinnati.

Popovich RP, Moncrief JW, Nolph KD, et al. 1978. Continuous ambulatory peritoneal dialysis. Ann Intern Med 88:449.

Rogers WH, Draper D, Kahn KL, et al. 1990. Quality of care before and after implementation of the DRG-based Prospective Payment System. JAMA 264:1989–1994.

Romeo AA. 1984. The Hemodialysis Equipment and Disposables Industry. OTA-HCS-32. Washington, D.C.: Office of Technology Assessment.

Rubenstein LV, Kahn KL, Reinisch EJ, et al. 1990. Changes in quality of care for five diseases measured by implicit review, 1981 to 1986. JAMA 264:1974–1979.

Sargent JA. 1990. Shortfalls in the delivery of dialysis. Am J Kidney Dis 15:500–510.

Smith W, Director, Office of Survey and Certification, Health Standards and Quality Bureau, Health Care Financing Administration. Letter of August 3, 1989, to Dolph R. Chianchiano, Associate Director, National Kidney Foundation.

Tenckhoff H, Schechter H. 1968. A bacteriologically safe peritoneal access device. Trans Am Soc Artif Intern Organs 14:181–183.