
Measurement and Interpretation of Productivity (1979)


Measuring Outputs in Hospitals

W. RICHARD SCOTT
Stanford University

INTRODUCTION

A major hindrance to the assessment of productivity in the service and public sectors is the difficulty of measuring outputs. Rather than attempting to address this problem broadly and abstractly, I propose to focus on a particular instance of it by examining the difficulties in measuring medical care outputs in hospitals. By concentrating on a single service industry (and on a subset of its output), I hope to illustrate both the difficulties encountered in defining and assessing outputs in service industries generally as well as some approaches being developed to overcome these problems. I also find it helpful to work at the level of the firm (in this case, the hospital) rather than at a more highly aggregated level of analysis. Since outputs are produced by firms or organizations, when there are difficulties in conceptualizing and measuring outputs, it seems sensible to seek clarity at the firm level. While several approaches to the study of outputs in hospitals will be described, I will discuss in greater detail in the final section of the paper some of the work my colleagues and I have carried out at the Stanford Center for Health Care Research.1

1 My principal collaborators at the Stanford Center have been William H. Forrest, Jr., director; Byron Wm. Brown, Jr.; Wayne Ewy; and Ann Barry Flood.

Productivity indexes focus attention on the ratio of outputs to inputs. Measuring outputs in hospitals is viewed as of particular importance because of the very rapid increase over the past two decades in the cost of hospital care. The cost of a day of hospital care is currently increasing at an annual rate of more than 15 percent, a rate greater than that exhibited by any other sector of the economy. There is no doubt that both labor and capital inputs to hospitals have been rapidly increasing over the recent period. With the aid of Feldstein's (1971) comprehensive analysis of The Rising Cost of Hospital Care, which has recently been updated and extended (Feldstein and Taylor 1977), we can briefly review the recent changes in the principal input factors.

Feldstein's analysis suggests that, allowing for the effects of inflation, non-labor costs have increased even more rapidly than labor costs. During the period 1955-1975, non-labor costs increased at an annual rate of 11 percent. Using the consumer price index to measure changes in input prices, Feldstein and Taylor (1977, p. 24) estimate that the 11-percent rate can be decomposed into a 7.2-percent rise in the volume of inputs as compared to a 3.6-percent rise in the price of the inputs. Fuchs (1974, p. 93) reports a similar conclusion, noting that about 1960 the costs of non-labor inputs began to grow more rapidly than labor expenditures.

Labor costs in hospitals have also risen faster than those of the rest of the economy during the period under review, an average annual increase of 9 percent. Labor inputs can also be decomposed into price and volume increases: labor costs have increased both because of higher wage rates and because of increases in the number of employees per patient day. Feldstein and Taylor (1977, p. 22) estimate that this overall increase in labor costs can be decomposed into a 6.3-percent increase in earnings per employee and a 2.6-percent increase in the number of employees per patient day. Fuchs (1974, p. 94) notes that the increase in the number of employees is particularly characteristic of the period after 1965: the number of personnel per patient after 1965 increased by 3.4 percent annually, compared with a rate of 1.7 percent from 1960 to 1965.
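Since expenditure equals price times volume, these annual rates decompose multiplicatively: (1 + cost growth) = (1 + price growth) x (1 + volume growth). A minimal check in Python, using only the Feldstein-Taylor rates quoted above, confirms that the reported components are consistent with the 11-percent and 9-percent totals:

    # Expenditure growth decomposes multiplicatively:
    # (1 + g_cost) = (1 + g_price) * (1 + g_volume).
    # The rates below are the Feldstein-Taylor annual averages quoted in the text.

    def combined_growth(g_price: float, g_volume: float) -> float:
        """Annual cost growth implied by price and volume components."""
        return (1 + g_price) * (1 + g_volume) - 1

    non_labor = combined_growth(g_price=0.036, g_volume=0.072)
    labor = combined_growth(g_price=0.063, g_volume=0.026)

    print(f"non-labor cost growth: {non_labor:.1%}")  # ~11.1%, the 11-percent figure
    print(f"labor cost growth: {labor:.1%}")          # ~9.1%, the 9-percent figure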

There is some dispute as to whether changes in the composition of the labor force over time have also contributed to the increase in costs. A number of observers (e.g., Lave 1966) have suggested that such changes are an important factor, but while Feldstein agrees that there have been changes in the types and training of hospital employees, he concludes from his analysis that "there is no evidence of substantial overall change in the average job level of hospital employees" (Feldstein 1971, p. 57). A more detailed analysis of changes in occupational composition over the 4-year period 1970-1974 in a stratified, random sample of 17 hospitals reported no change in the ratio of patients to nurses but a small shift toward a higher proportion of LPNs. The largest changes observed were the increased number of MDs employed by hospitals and increases in the number of residents and interns per patient load (Stanford Center for Health Care Research 1977, p. 164).

Feldstein and Taylor (1977, pp. 21-26) develop an index of the total quantity of inputs used per patient day that combines the costs of labor and non-labor inputs after adjusting for inflation. In terms of 1967 prices, they estimate that total real inputs per patient day rose from $36.52 in 1955 to $89.14 in 1975, an average annual increase of 4.6 percent. Since changes in the quantity of inputs per unit of output reflect both changes in overall resource productivity and changes in the nature of the product, we must either conclude that there have been changes in the output of hospitals or that total productivity in hospitals has decreased by 4.6 percent annually during the period under review. Most observers are unwilling to accept this dismal conclusion and instead insist that a day of hospital care is not the same now as it was in 1965 or 1955. Feldstein and Taylor (1977, p. 2) embrace this alternative conclusion: "The unusually rapid increase in the cost of a day of hospital care reflects a change in the character of hospital services rather than a higher price for an unchanged product."
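The 4.6-percent figure is simply the compound annual growth rate implied by the two endpoint values over the 20-year span, as this short Python check (values taken from the text) confirms:

    # Compound annual growth rate of real inputs per patient day (1967 dollars).
    inputs_1955 = 36.52
    inputs_1975 = 89.14
    years = 1975 - 1955

    cagr = (inputs_1975 / inputs_1955) ** (1 / years) - 1
    print(f"average annual growth: {cagr:.1%}")  # ~4.6%, as Feldstein and Taylor report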

Further, the product of hospitals is alleged to vary not only over time but also at any given time across hospitals. Thus, Lave (1966, p. 58) argues:

The average patient day of a 30-bed rural hospital is vastly different from that of a 600-bed teaching hospital. In comparing the cost per patient day in two hospitals, the difference in product must be taken into account. (Not taking account of the difference is analogous to ignoring the difference between a Volkswagen and a Rolls Royce. . . .)

Claims of variation in the product of hospitals over time or across facilities can only be verified by developing accurate methods of assessing the outputs of hospitals.

ASSESSING OUTPUTS

Hospitals are multi-product firms. While their primary products are those associated with inpatient care, most hospitals also produce varying quantities of education, research, community service, and outpatient care. Each of these types of outputs is intrinsically difficult to measure, and to attempt to combine them raises further issues of commensurability and relative importance or value.

For present purposes we restrict attention to the primary output: inpatient care activities. Doing so will simplify our problems but by no means eliminate them. It is conventional and useful to distinguish between two general types of outputs of health care organizations such as hospitals: processes and outcomes. Briefly, processes refer to actions or activities performed on the object of work (e.g., the patient), while outcomes refer to changes that occur in the state or the condition of the materials or objects worked on. Both types of measures have their advocates and critics, and we learn something about the output of hospitals from each type. Unfortunately, what is learned may not be consistent, as will be discussed below.

PROCESS MEASURES OF HOSPITAL CARE

Process measures focus on the functioning of the hospital as indicated by the number and type of services performed for patients. Some measures of services emphasize quantity without respect to qualitative differences; others emphasize the quality of services performed. Process measures of quantity are intended to answer questions like, What was done to the patient? How many times per day was the activity carried out? Over how many days was the work performed? It is useful to distinguish between service intensity, the number or amount of services received by a patient during a specified period, and service duration, the length of time over which the services were extended. Service intensity is typically measured either per day or per illness episode. Service duration is usually measured by length of stay and is often limited to those types of services, e.g., hotel, nursing, food, that all patients utilize regardless of specific illness. As might be expected, service intensity and duration tend to be inversely correlated.

It is important to recognize that the types and amounts of services provided differ greatly among hospitals and over time. These differences arise chiefly from the differing requirements for service of the types of patients in each hospital. Several researchers have established that patient populations differ significantly among hospitals (see Feldstein 1967, Lave and Lave 1971, Stanford Center for Health Care Research 1974, 1976). But these differences can also arise from variations among hospitals in the modes of treatment. The first source of variation, arising from differences in the patient populations served, addresses the question, Do sicker patients receive more services?

The second source of variation, due to differences in treatment modes, asks, Do patients of a similar type receive different amounts or types of service in different hospitals? It also allows us to address such questions as, Do more services or more costly services result in better-quality care?

Process measures of quality tend to focus on the provider's conformity to established procedures of "good practice." Attention is focused on activities that are believed to directly affect the quality of the product. Much recent effort has been devoted to the development and testing of protocols and audit procedures to assess the quality of patient care activities within hospitals. As might be expected, systematic approaches to the assessment of care processes have been developed for and spread most rapidly among nursing units (e.g., Carter et al. 1972, Phaneuf 1972, Browning 1974), but audit procedures for physicians are also under development (Lembcke 1967, Payne 1966, Payne et al. 1976, American Hospital Association 1972, Institute of Medicine 1976) and in some form are gradually being implemented in most U.S. hospitals under the newly established Professional Standards Review Organizations (PSROs). An example of a carefully defined process measure of quality is the physician performance index (PPI), a weighted series of hospital service items related to history, physical examination, laboratory and radiological services, and therapy that compares actual practice to a set of optimal-care criteria developed for a variety of diseases (Payne et al. 1976).

It is important to emphasize that all process measures evaluate conformity to a given standard of performance but do not evaluate the adequacy or correctness of the standards themselves. They rest on the assumption that it is known what activities are required to produce desirable outputs (Suchman 1967, Scott 1977). Such an assumption may not be warranted in situations where the work performed is both complex and uncertain, as is often the case in the delivery of medical care.

OUTCOME MEASURES OF HOSPITAL CARE

Outcome measures focus attention on the characteristics of the materials or objects on which the organization performs its work. As opposed to process measures, which assess effort or activities, outcome indicators focus on what effects have actually been achieved. Even if attention is restricted to inpatient care, several types of relevant outcomes have been suggested. In addition to noting that medical care is supposed to make a contribution to health, Fuchs (1968, p. 118) argues that physicians provide a "validation service," that is, an evaluation of an individual's health status, that is itself of value, e.g., for insurance purposes; and Feldstein (1971, p. 39) points out that in addition to the effect on health, hospital outputs include the level of comfort afforded the patient during the period of hospitalization and reduction in the patient's uncertainty, a product perhaps improved by more elaborate diagnostic procedures.

All observers would agree, however, that the outcomes of primary significance are changes in the health status of the patient.

As with processes, it is possible to assess the quantity and the quality of outcomes produced. When the products are highly uniform, the number of units produced or persons served is a useful measure of outcome. However, since individual patients are highly varied in their condition and need, and it is by no means the case that hospitals draw in a random fashion from the patient population to be served, measures of quantity such as number of patients discharged are of little use.

Most analysts regard outcome measures as the quintessential indicators of quality of care. Thus, in his review of measures of health care quality, Donabedian (1966, p. 169) concludes that "outcomes, by and large, remain the ultimate validators of the effectiveness and quality of medical care." Improvement in health status would seem to be the most appropriate measure of hospital outputs. Some analysts, however, disagree. Mann and Yett (1968, pp. 196-197), for example, make the following comparison: "There are those who argue that the output of a health facility should be specified in terms of its effect on the patient. . . . We reject this definition of hospital output for the same reason that we do not regard the output of a beauty salon as beauty." Although they do not spell the argument out, Mann and Yett apparently believe that (1) beauty is difficult to measure accurately and (2) individuals differ in natural beauty, variations for which the beauty shop should not be held accountable. The first problem is especially severe for outcomes. The second is a problem that plagues both process and outcome measures, as already noted. Let us see how health care researchers have dealt with these two problems.

An accurate assessment of outcomes is particularly difficult when the outcomes of interest involve changes in an underlying state or process, changes in health status or well-being not directly observable. Hence, questions of the validity and reliability of the indicators employed are especially critical. As Fuchs (1968, p. 118) notes, most measures take a negative approach, making inferences about health from measures of the degree of ill health as indicated by mortality, morbidity, disability, etc. Death is a fairly reliable measure but is a relatively rare event and may thus be highly insensitive to the health processes of interest (Payne et al. 1976, pp. 29-31). Indicators of morbidity and return to function are much less reliable, depending on some subjective assessments by either practitioners or patients.

Also, the question as to when outcomes are assessed is often critical: some effects may not be discernible until long after the medical intervention has occurred and the patient has been discharged. Nevertheless, with all their problems, outcome measures focus attention on effects as opposed to efforts and hence are more likely to avoid the fallacy of using measures of inputs as surrogates for outputs.

ADJUSTING PROCESS AND OUTCOME MEASURES FOR DIFFERENCES IN PATIENT MIX

It has been asserted that hospital output, whether measured in terms of services or outcomes, varies over time and across hospitals. These differences arise chiefly from the differing requirements for services of the patients in each hospital, requirements determined by their disease and general physical condition. Not to take these variations into account is to fail to acknowledge differences in the medical care demands placed on hospitals. Thus, the first requirement of research that would compare the services or outcomes of hospitalized patients is to take into account differences in the demands placed on hospitals and, by implication, differences in the products provided.

Several approaches have been used to take into account differences in services required or provided. Berry (1967) attempted to group hospitals according to similarities in their facilities, admitting that this approach represented "a second-best approximation to grouping by product homogeneity." In a similar approach, Carr and Feldstein (1967) grouped 3,147 hospitals into service-capability groups as measured by the number of facilities, services, and programs offered. Both of these studies attempted to control for product in order to examine the effect of hospital size on care costs. This approach to controlling for product differences has been criticized by Lave and Lave (1971) and by Jeffers and Siebert (1974, p. 295), who note, "The mere availability of services and facilities reflects neither rates of utilization nor the intensity with which services are rendered." Roemer et al. (1968) proposed a severity-adjusted mortality rate in which length of stay was proposed as an indicator of severity of illness. This approach has been employed in several studies of quality of hospital care (e.g., Neuhauser 1971, Roemer and Friedman 1971), but it has also been strongly criticized on both conceptual and empirical grounds (see Goss and Reed 1974). A different approach is represented by the work of Feldstein (1967), who approached the problem by assessing the proportion of cases in a broadly defined set of diagnostic categories in each hospital in his study of 177 nonteaching British hospitals.

Lave and Lave (1971) used the same approach in their study of 65 western Pennsylvania hospitals. Diagnostic categories employed included 17 broad categories as well as a more detailed set of 48 categories, which revealed a considerable amount of variation among hospitals in the types of patients treated. Other studies have narrowed attention to focus on a relatively few diagnostic categories and have gathered other data on patient characteristics having a potential bearing on outcome. Thus, in the National Halothane Study, involving 34 hospitals, where attention was restricted to surgical patients, adjustment variables included type of operation, age, sex, previous operations, physician status, and year in which surgery was performed (Bunker et al. 1969). And in earlier research conducted by the Stanford Center, attention was restricted to the study of 15 surgical categories, with quite detailed and varied data being collected on patient condition prior to surgery (Stanford Center for Health Care Research 1974, 1976). In the final section of this paper, a more elaborate attempt to adjust service and outcome measures for patient differences will be described.

PROCESS VERSUS OUTCOME MEASURES OF QUALITY OF CARE

Process measures of care quality are much more widely employed than outcome measures. Providers generally prefer process measures, partly because care processes are more fully under their control than are outcomes, which reflect the operation of factors other than the care and attention with which services were administered. As noted, the recently mandated PSROs, which are staffed by physicians, emphasize process measures of care quality. By contrast, clients and consumer representatives usually prefer outcome measures in assessing quality. What does it matter to the patient how well the activities were carried out if they were ineffective in producing the desired effects?

As already noted, process measures are based on the assumption that what constitutes good practice is already known. Such an assumption is rarely justified in carrying out complex and uncertain work, such as is represented by surgical care. A careful study by Brook (1973) that compared process and outcome evaluations by physicians of the treatment of three medical conditions (urinary tract infection, hypertension, and ulcerated lesion of the stomach or duodenum) reports little association between judgments based on the two sets of criteria. The two types of measures are by no means interchangeable. And a study by Payne et al. (1976), based on a retrospective review of the medical records of patients discharged from 22 short-term hospitals in Hawaii, compares quality of care process as assessed by the PPI for 16 diagnostic categories with quality of care based on outcome measures.

Both mortality and specific types of intermediate outcomes defined for each diagnostic type were employed to assess outcome, with more reliance being placed on the latter measures. The authors conclude (p. 31), "Overall, the correlations between good physician performance and good outcomes were more often in the 'right' direction, but they were not often statistically significant."

Those concerned with controlling the costs of medical care have yet another reason to be wary of placing exclusive reliance on process measures of quality. Based on his comparison of the two methods, Brook (1973, p. 57) cautions:

. . . Regulation based on process data is likely to have the effect of increasing the number of tests, diagnostic procedures, etc., which will, in turn, increase the cost of medical care. It has been shown in this study that correlation between process and outcome for most parameters was non-significant and for others it was weak. Consequently, regulation on the basis of process information will increase costs but is unlikely to improve the component of health under control of the medical care system.

SUMMARY OF A STUDY OF SERVICES AND OUTCOMES IN HOSPITALS

In this final section, I wish to briefly summarize some work recently completed by our research team at the Stanford Center.2 While this work has many limitations, it does attempt to remedy some of the major deficiencies associated with prior research in this area. Most important among these are that (1) a careful attempt is made to adjust both services and outcomes for differences among patients, (2) direct measures of several types of diagnostic and therapeutic services are developed, and (3) services and outcomes are independently measured so that the association between them can be empirically assessed.

2 This research was carried out by the Stanford Center for Health Care Research under contract HRA 230-75-0169 with the National Center for Health Services Research, Health Resources Administration, U.S. Department of Health, Education and Welfare. The Commission on Professional and Hospital Activities (CPHA) of Ann Arbor, Michigan, collaborated with the Center to provide data for the study. These data were supplied by CPHA only at the request and upon the authorization of the hospital whose data were used. Any analysis, interpretation, or conclusion is solely that of the Center, and CPHA expressly disclaims any responsibility for any such analysis, interpretation, or conclusion. A full report of the study appears in Stanford Center for Health Care Research (1977). This summary draws heavily on the paper by Flood et al. (1978).

DATA AND METHODOLOGY

The data used in this study involve over 600,000 patients treated during the 4-year period 1970-1973 in 17 acute care hospitals in the United States. These hospitals had all participated in a prospective study of organizational factors affecting quality of surgical care carried out by the Stanford Center for Health Care Research (1974). These study-hospitals were selected from among the 1,377 hospitals participating in the Professional Activities Study (PAS) of the Commission on Professional and Hospital Activities (CPHA) in 1973. The PAS is a medical record abstracting service for hospitals, which collects and summarizes selected information for every patient discharged from one of CPHA's member hospitals. Sixteen of the study-hospitals were selected as a stratified random sample of all short-term nonfederal voluntary hospitals participating in PAS; the seventeenth, administratively linked to one of the study-hospitals selected, agreed to participate at its own expense. The sample was stratified to insure differences in size, teaching status, and expenditures per patient day. Ten states and all major geographic regions within the continental United States are represented in this sample. Compared to all hospitals of a similar type in the United States, the study-hospitals are larger than average: 237 average daily census compared to a national average of 124. Six of the study-hospitals, or 35 percent, were affiliated with a medical school or had an active residency training program, compared to 28 percent nationally. Costs of care were quite similar: $113 per patient day compared to $115 for the national average at the time of the study.

All patient data were based on the information contained in the PAS abstract record, which was available for each of the approximately 670,000 patients discharged from the study-hospitals during the period May 1970 to December 1973. The final set of study-patients used in these analyses was approximately 603,000. Virtually all of the patients excluded from these analyses were newborns. Data from the patient's abstract record provide the basis for the measures of services, including the types and amount of diagnostic and therapeutic services received during the hospitalization and the length of stay, as well as the measures of patient outcome, principally death in hospital. In addition, information relating to the patient's disease and physical condition was used to adjust measures of service intensity and outcome for differences in patient mix.

THE MEASURES

The principal measures used in these analyses involve attempts to independently assess care outcomes and services.

Outcomes of Care

In order to examine the effectiveness of services provided to patients in hospitals, the outcome of care, in-hospital death, is measured. The outcome measure is adjusted to take into account patient disease and physical condition as described below.

Measures of Services

Service intensity refers to the quantity of services received by a patient during a hospitalization episode. While many kinds of patient care services provided by hospitals could be enumerated, we were limited to those services recorded on the PAS abstracts. From these data we identified eight distinct measures of services; seven of these measures indicated important diagnostic and therapeutic services. In addition, duration of services, measured by the number of days of hospitalization, was available. Whenever possible, the amount of each individual service consumed was assessed, e.g., the number of classes of drugs received or the number of operative procedures undergone. In some cases, information was limited to use or nonuse of a service.

In addition to these individual measures, three composite measures of service intensity were developed. The intent of each composite measure was to reflect three dimensions of services: (1) the mix of different types of services utilized, (2) the amount of each individual service received, and (3) the relative costliness of the different types of services used. The first composite measure, service intensity, was based on the mix of specific diagnostic and therapeutic services received during the hospitalization, including the use of special care units. The second composite measure, service duration, was based upon the length of stay in days and reflects the amount of basic nursing and hotel services provided during the hospitalization. The third composite measure, overall services, combined the first two composite measures to reflect both service intensity and duration. In combining the individual measures to form these three composites, each received a weight based upon the proportion of hospital charges appropriate for the type of service being measured. These weights were developed from data on hospital charges supplied by a non-study-hospital. They do not reflect the actual cost variations observed in each study-hospital but were uniformly applied for all hospitals. They do reflect the average relative costliness of providing a given type and amount of service. The individual measures used in each composite measure and their respective weights are summarized in Table 1.

[TABLE 1: The individual measures used in each composite measure of services and their respective weights. The body of the table is not legible in the machine-read text.]
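To make the construction concrete, the sketch below shows one way such a charge-weighted composite could be computed. The service items and weights here are hypothetical stand-ins; the study's actual items and weights are those summarized in Table 1.

    # Hypothetical illustration of a charge-weighted composite service score.
    # The service items and weights are invented for the example; the study's
    # actual items and weights are those summarized in Table 1.

    # Charge-based weights: each service type's share of total hospital charges.
    WEIGHTS = {
        "lab_tests": 0.15,     # hypothetical
        "radiology": 0.10,     # hypothetical
        "drug_classes": 0.05,  # hypothetical
        "days_of_stay": 0.70,  # basic nursing and hotel services (duration)
    }

    def composite_score(patient_services: dict) -> float:
        """Weighted sum of the amounts of each service a patient received."""
        return sum(WEIGHTS[item] * amount for item, amount in patient_services.items())

    episode = {"lab_tests": 12, "radiology": 2, "drug_classes": 4, "days_of_stay": 8}
    print(composite_score(episode))  # one overall-services value for the episode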

For each individual measure of service intensity and for the composite measures, adjustments are made based upon the types of patient diseases and physical conditions. This adjustment procedure is detailed below. For the remainder of this paper, the analysis of service intensity is limited to the adjusted composite measures of services.

For both the outcomes of care and the measures of services, adjustment is made for the outcome or service that is expected on the basis of knowing the patient's health-related characteristics. This standardization is computed at the individual patient level by detailed analysis of the individual records of the 603,000 patients. The process is essentially the same for both outcomes and service intensity, but we describe the technique using service intensity as an example.

The standardization procedure involves an empirical estimation, using linear regression, of the average amount of services that would be provided during a typical hospitalization for a patient of a given type. The type of patient is defined on the basis of knowing the major diagnosis explaining the admission to the hospital (one of 349 diagnostic groups) and various indicators of the patient's condition and treatment record, including additional diagnoses, admission test findings, the severity of any operations undergone, and certain demographic characteristics such as age, sex, and a height-weight index. In all, over 40 variables for adjustment were used. Details of this procedure and technique for standardization have been described elsewhere (Stanford Center for Health Care Research 1977).

After obtaining the estimates of the services expected, a comparison is made with the services actually received. In the analysis presented here the difference between these two scores is used for the comparison. Thus, for a given patient, we have a measure of the excess or deficit amount of services received after taking into account his disease and condition. The difference scores obtained are then averaged over all patients in a study-hospital. The measures of service intensity used to characterize hospitals in this analysis are the average departures from the service intensity that would be expected on the basis of patient mix at the study-hospital. Likewise, the measure of outcome used to characterize hospitals is the discrepancy between the observed in-hospital death rate and the rate expected on the basis of patient mix.
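The logic of the standardization can be stated compactly: regress the service (or outcome) measure on patient characteristics over all patients, take each patient's residual (observed minus expected), and average the residuals within each hospital. A minimal sketch in Python with numpy; the three toy adjusters and simulated data stand in for the study's 40-odd patient characteristics:

    import numpy as np

    # X: patient-level adjusters (diagnostic group indicators, condition measures, etc.)
    # y: observed service intensity (or an in-hospital death indicator, for outcomes)
    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 toy adjusters
    y = X @ np.array([5.0, 1.0, -0.5, 2.0]) + rng.normal(size=n)
    hospital = rng.integers(0, 17, size=n)  # hospital identifier for each patient

    # Step 1: estimate expected services from patient characteristics, pooled over all patients.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    expected = X @ beta

    # Step 2: difference score = observed minus expected, averaged within each hospital.
    # A positive value means the hospital delivers more services than its patient mix predicts.
    standardized = {h: (y - expected)[hospital == h].mean() for h in range(17)}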

RESULTS: TIME TRENDS

To examine differences in services and outcomes over time, patients from the 17 hospitals were aggregated by year of treatment, ignoring hospital. Examining first the changes over the 4 years in the service intensity component measure, the observed (unstandardized) values increased approximately 6.3 percent over the study period, which is equivalent to an annual increase of 2.1 percent. The expected values for this component also increased during this period, by 3.2 percent, or 1.1 percent annually, so that the standardized rate increased at a slower pace than that of the observed value (3.1 percent, or 1.1 percent annually). Thus, one half of the increase in the specific services component was due to increases in the "needs" of the patient population, while the remaining half was due to increases in services beyond the increase predicted by patient mix changes. Almost all of the individual measures making up the component exhibited an increase over the 4-year period; the largest single increase was recorded for the use of special care units, which showed a 3.3-percent annual increase.

Service duration, measured by length of stay, decreased on the average by 0.38 days, from 8.67 days in 1970 to 8.29 days in 1973. This is a total decrease in crude length of stay of 4.4 percent, or 1.5 percent annually. Since expected length of stay was increasing by 3.4 percent during the same period, standardized service duration showed an even greater decrease: 7.5 percent for the period, or 2.6 percent annually. This is equivalent to approximately a 0.66-day shorter average length of stay after accounting for the changing composition of the hospitalized population.

The overall services component provides a weighted combination of service intensity and duration. The observed values increased very little (an annual rate of 0.2 percent), while the expected values increased somewhat more steeply (an annual rate of 1.1 percent), so that the net effect was that the standardized measure actually decreased 2.6 percent over the 4-year period, or 0.9 percent annually. Since this downward trend in adjusted services is in conflict with much of the literature cited in the previous sections of this paper, we note that our measures of service inputs are per hospital episode rather than per patient day. When services are calculated on a patient-day basis, the trend in services is upward at a rate of 3.6 percent per year.

Finally, we turn briefly to time trends in outcomes. Over the 4-year period, 3.2 percent of the study-patients died in the hospitals. The crude death ratios decreased monotonically over this period, from 1.019 to 0.991, while the expected death ratios increased by 4.1 percent. The net effect of the opposing trends in crude death rate and expected death rate resulted in a statistically significant decrease of 2.3 percent per year in standardized death rate.
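The same observed-versus-expected logic used for patient-level standardization governs these trend figures: the standardized trend is the observed trend net of the trend expected from changes in patient mix. A minimal worked check in Python, using the length-of-stay figures quoted above:

    # Observed change in crude length of stay over 1970-1973, and the change
    # expected from patient mix alone; both figures are quoted in the text.
    observed_change = -0.044   # crude length of stay fell 4.4 percent
    expected_change = +0.034   # expected length of stay rose 3.4 percent

    # Standardized (mix-adjusted) change: the observed level relative to the expected level.
    standardized = (1 + observed_change) / (1 + expected_change) - 1
    print(f"standardized change: {standardized:.1%}")  # ~ -7.5%, matching the text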

RESULTS: HOSPITAL DIFFERENCES AND THE RELATION BETWEEN SERVICES AND OUTCOMES

Combining the data over the 4 years, we next aggregated patients by hospital. When differences among hospitals in service intensity were examined, it was found that even after adjusting for differences in the patient populations served by the hospitals, substantial differences among the hospitals remained. The coefficient of variation for differences in service intensity among hospitals was 7 percent; for service duration it was 14 percent; and the coefficient for the overall composite measure was 8 percent. Substantial variation was also found among hospitals in adjusted death rates. The coefficient of variation was 14 percent, indicating a twofold difference between hospitals at the two extremes of the range.

Before examining the relationship of service intensity to outcome, we note first the interrelationship of the standardized composite measures of services for hospitals (see Table 2).

TABLE 2 The Relation Between Service Intensity, Duration, and Outcomes in Hospitals

                              Services                                Outcomes:
  Composite Measures          SI     SD              OS               In-Hospital Death
  Service intensity (SI)      --     -0.27 (0.18)    0.20 (0.67)*     -0.43† (-0.03)
  Service duration (SD)              --              0.90* (0.85)*     0.64* (0.59)‡
  Overall services (OS)                              --                0.45† (0.43)†

NOTE: Measures are based on 603,000 patients treated in 17 hospitals. Entries are zero-order Pearson product-moment correlations. Main entries are measures that have been standardized to remove patient-mix differences. Entries in parentheses are crude rates of services and outcomes.
* p ≤ 0.01 for a two-tailed test; sample size 17.
† p ≤ 0.10.
‡ p ≤ 0.05.

There was a tendency for service intensity to be associated with shorter duration of service (-0.27). The measure of overall services bore a slight relationship to service intensity (0.20) and a very strong association with service duration (0.90).
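Each entry in Table 2 is an ordinary zero-order Pearson correlation computed over the 17 hospital-level values. A minimal sketch of the computation in Python with numpy; the random array is a placeholder for the actual hospital-level data, which are not reproduced here:

    import numpy as np

    # Rows: 17 hospitals; columns: standardized SI, SD, OS, and death-rate discrepancy.
    rng = np.random.default_rng(1)
    data = rng.normal(size=(17, 4))  # placeholder values only

    # Zero-order Pearson product-moment correlation matrix, as in Table 2.
    corr = np.corrcoef(data, rowvar=False)
    labels = ["SI", "SD", "OS", "death"]
    for i in range(4):
        for j in range(i + 1, 4):
            print(f"r({labels[i]}, {labels[j]}) = {corr[i, j]:+.2f}")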

[FIGURE 1: Standardized death rates for the 17 hospitals, cross-classified by whether standardized service intensity and standardized service duration were higher or lower than expected. The figure itself is not legible in the machine-read text.]

To indicate the impact of the standardization procedure, Table 2 also presents, in parentheses, the Pearson correlations for the unstandardized measures. We note that the crude measure of service intensity bore a slight positive association with service duration (0.18) and a strong relationship to overall services (0.67). Again, service duration was strongly associated with the measure of overall services (0.85).

Table 2 also reports the correlation of standardized service intensity measures with standardized outcomes. Here we note that a higher level of service intensity was associated with a lower death rate (-0.43). In contrast, longer service duration was associated with an increased death rate (0.64). Not surprisingly, given the relationship of the composite measure of overall services to its two components, specific services and routine services, the overall services measure paralleled the finding for service duration but was not as strong (0.45). All correlations between standardized services and outcome were significant at the 10-percent level or better for a sample size of 17. By contrast, examination of the unstandardized measures for services, presented in parentheses in Table 2, reveals no relationship between crude rates of service intensity and death rate (-0.03, in contrast to -0.43 for the standardized measures). The relation between service duration and death rate was in the same direction as seen in the standardized measures but smaller in magnitude: 0.59 compared to 0.64. Again we see that the results are strongly influenced by whether the analysis was performed on standardized or crude data. Taking into account the condition of the patients treated did affect the nature of the relations observed between service intensity and outcomes.

In Figure 1 the standardized death rates for the 17 hospitals are displayed, classified by whether the services were greater or lower than expected, using both service intensity and duration to classify the hospitals. Looking first at the marginal effects, the eight hospitals providing higher service intensity than expected had a mean standardized death rate of -0.28, in contrast to a mean of 0.28 for the nine hospitals with lower intensity. (The overall mean standardized death rate for the 17 hospitals was 0.01, with a standard deviation of 0.52.) The hospitals that provided shorter duration of services than expected had a mean standardized death rate of -0.35, in contrast to a mean of 0.43 for the eight hospitals with longer duration than expected. These marginal effects follow the findings reported in Table 2.

Turning to the main cell entries of Figure 1, the effects of service intensity and duration appear to combine strongly; the four hospitals with the best (lowest) standardized death rates are among the five hospitals in the cell with higher service intensity and shorter service duration.

And the five hospitals with the worst standardized death rates appear without exception in the cell with lower service intensity and longer service duration. Consistent with the effect reported in Table 2 and in the margins of Figure 1, the effect of standardized duration was stronger than that for service intensity. That is, the hospitals ranked fourth, sixth, seventh, and eleventh best in terms of their standardized outcomes were in the cell with shorter service duration and lower service intensity, while the hospitals ranked ninth, tenth, and twelfth best were in the cell with longer service duration and higher service intensity. These two dichotomized measures provide nearly perfect discrimination of hospitals into the four cells of increasing standardized death rates.

However, an additional factor must be considered that may be accounting for the observed results. It has been well documented that length of stay in hospitals is influenced by differences in practice among geographical regions. Because of the nature of our sample, we can investigate the possibility that regional variation accounts for the relationship observed between the hospital-level measures of services and outcomes. All four major census regions of the continental United States are represented in our sample, but since only one hospital was located in the north-central region, we have combined it with hospitals in the northeast region, resulting in three regions for examination: north, south, and west. One-way ANOVAs testing for regional effects (sketched below) did reveal significant differences among regions for standardized duration of services and for standardized outcomes, but not for standardized measures of specific services.

Directly paralleling the procedure used for Figure 1, we next investigated the relationship among these measures within each region. Standardized duration of services was strongly related to region: eight of the nine hospitals in the north exhibited longer length of stay than expected, while all five hospitals in the west and all three hospitals in the south exhibited shorter length of stay than expected. However, within each region, standardized service intensity was perfectly related to standardized outcome, with hospitals exhibiting higher than expected levels of services achieving better than expected outcomes. In short, it would appear that most of the impact of duration of services on outcomes is due to regional variation in medical practice. But within each region the intensity of specific services provided perfect discrimination among the standardized death rates of the hospitals, with higher specific services being associated with better outcomes.
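For concreteness, the regional test referred to above is a standard one-way analysis of variance on the hospital-level standardized scores, grouped by region. A minimal sketch in Python using scipy; the group values below are placeholders, not the study's data:

    from scipy.stats import f_oneway

    # Hospital-level standardized duration scores grouped by region (placeholder values).
    north = [0.4, 0.6, 0.3, 0.5, 0.2, 0.7, 0.4, 0.5, -0.1]  # 9 hospitals
    west = [-0.5, -0.3, -0.6, -0.4, -0.2]                    # 5 hospitals
    south = [-0.4, -0.2, -0.3]                               # 3 hospitals

    # One-way ANOVA: do mean standardized durations differ across the three regions?
    f_stat, p_value = f_oneway(north, west, south)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")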

CONCLUDING COMMENTS

Using the criterion of improvements in outcomes as measured by standardized mortality rates, the study just summarized suggests that there has indeed been increased productivity in the hospital industry. Not only are higher costs associated with more elaborate and diverse services provided to patients, a relationship well documented, but it appears that more elaborate and diverse services are also associated with better outcomes for patients as assessed by improvements in their health status.

Probably the most severe limitation of the study just reviewed is the restriction of the outcome measure to in-hospital death. (This is not to suggest that there is not much room for improvement in the measures of hospital services.) Many other relevant measures of outcomes might have been included had the data base permitted. In an earlier study conducted by the Stanford Center for Health Care Research (1974), several alternative outcome measures were examined, including morbidity at seven days postsurgery, return to function at 40 days, and mortality, including death outside of the hospital, at 40 days. As described elsewhere (Scott et al. 1978), these several outcome measures were not highly intercorrelated and did not show the same pattern of response when related to various explanatory variables. Like Brook (1973, p. 59), we were forced to conclude that "the results of an assessment of quality of care will vary with the method used to measure it." This is why my strongest recommendation is that there should be no premature selection or foreclosing of any of the alternative measures of outputs of health care systems. Although we have made some progress in recent decades, we do not yet know enough about the operation of these systems either to measure precisely their salient outcomes or to assume that we know what processes are accounting for them.

REFERENCES

American Hospital Association (1972) Quality Assurance Program for Medical Care. Chicago: American Hospital Association.
Berry, Ralph E., Jr. (1967) Returns to scale in the production of hospital services. Health Services Research 2(Summer):123.
Brook, Robert H. (1973) Quality of Care Assessment: A Comparison of Five Methods of Peer Review. Bureau of Health Services Research and Evaluation, DHEW publication no. HRA-74-3100. Washington, D.C.: U.S. Department of Health, Education and Welfare.

Browning, M. H., ed. (1974) The Nursing Process in Practice. New York: American Journal of Nursing Company.
Bunker, John P., Forrest, William H., Jr., Mosteller, Frederick, and Vandam, Leroy D. (1969) The National Halothane Study. Washington, D.C.: National Institute of General Medical Sciences.
Carr, W. John, and Feldstein, Paul J. (1967) The relationship of cost to hospital size. Inquiry 4(June):45-65.
Carter, J. H., et al. (1972) Standards of Nursing Care: A Guide for Evaluation. New York: Springer.
Donabedian, Avedis (1966) Evaluating the quality of medical care. Milbank Memorial Fund Quarterly 44(Part 2, July):166-206.
Feldstein, Martin (1967) Economic Analysis for Health Service Efficiency. Amsterdam: North-Holland.
Feldstein, Martin (1971) The Rising Cost of Hospital Care. National Center for Health Services Research and Development. Washington, D.C.: Information Resources Press.
Feldstein, Martin, and Taylor, Amy (1977) The Rapid Rise of Hospital Costs. Staff report of the Council on Wage and Price Stability, Executive Office of the President.
Flood, Ann Barry, Scott, W. Richard, Ewy, Wayne, Forrest, William H., Jr., and Brown, Byron Wm., Jr. (1978) The Relationship Between Intensity of Medical Services and Outcomes for Hospitalized Patients. Paper presented at the Pacific Sociological Association, Spokane, Washington, April.
Fuchs, Victor R. (1968) The Service Economy. New York: Columbia University Press.
Fuchs, Victor R. (1974) Who Shall Live? Health, Economics, and Social Choice. New York: Basic Books.
Goss, Mary E. W., and Reed, J. E. (1974) Evaluating the quality of hospital care through severity-adjusted death rates: some pitfalls. Medical Care 12(March):202-213.
Institute of Medicine (1976) Assessing Quality in Health Care: An Evaluation. Washington, D.C.: National Academy of Sciences.
Jeffers, James R., and Siebert, Calvin D. (1974) Measurement of hospital cost variation: case mix, service intensity, and input productivity factors. Health Services Research 9(4):293-307.
Lave, Judith R. (1966) A review of the methods used to study hospital costs. Inquiry 3(2):57-81.
Lave, Judith R., and Lave, Lester B. (1971) The extent of role differentiation among hospitals. Health Services Research 6(1):15-38.
Lembcke, P. A. (1967) Evolution of the medical audit. Journal of the American Medical Association 199:111-118.
Mann, Judith K., and Yett, Donald E. (1968) The analysis of hospital costs: a review article. Journal of Business 41(April):191-202.
Neuhauser, Duncan (1971) The Relationship Between Administrative Activities and Hospital Performance. Research series no. 28. Chicago: Center for Health Administration Studies.
Payne, Beverly C. (1966) Hospital Utilization Review Manual. Ann Arbor: University of Michigan Press.
Payne, Beverly C., et al. (1976) The Quality of Medical Care: Evaluation and Improvement. Chicago: Hospital Research and Educational Trust.
Phaneuf, M. C. (1972) The Nursing Audit: Profile for Excellence. New York: Appleton-Century-Crofts.
Roemer, Milton, and Friedman, Jay W. (1971) Doctors in Hospitals. Baltimore: Johns Hopkins University Press.

Roemer, Milton, Moustafa, A. R., and Hopkins, Carl E. (1968) A proposed hospital quality index: hospital death rates adjusted for case severity. Health Services Research 3(Summer):96-118.
Scott, W. Richard (1977) Effectiveness of organizational effectiveness studies. Pp. 63-95 in Paul S. Goodman and Johannes M. Pennings, eds., New Perspectives on Organizational Effectiveness. San Francisco: Jossey-Bass.
Scott, W. Richard, Flood, Ann Barry, Ewy, Wayne, and Forrest, William H., Jr. (1978) Organizational effectiveness and the quality of surgical care in hospitals. Pp. 290-305 in Marshall W. Meyer, ed., Environments and Organizations. San Francisco: Jossey-Bass.
Stanford Center for Health Care Research (1974) The Study of Institutional Differences in Postoperative Mortality. Springfield, Va.: National Technical Information Service.
Stanford Center for Health Care Research (1976) Comparison of hospitals with regard to outcomes of surgery. Health Services Research 11(Summer):112-127.
Stanford Center for Health Care Research (1977) Studies of the Determinants of Service Intensity in the Medical Care Sector. Stanford, Calif.
Suchman, E. A. (1967) Evaluative Research. New York: Russell Sage Foundation.
