Physician Staffing for the VA: Volume I

5 EXPERT JUDGMENT APPROACHES TO PHYSICIAN STAFFING

INTRODUCTION

Since the study's inception, it has been clear that expert judgment would be important in the formal development of a VA physician requirements methodology. The original statement of work noted that "Because the available empirical data base alone is not adequate for driving the development effort or generating quantifiable estimates by purely mechanical numerical exercises, relevant informed professional judgments will be required throughout . . . and may well be an integral component of the physicians' requirements methodologies itself" (Institute of Medicine, 1987). To implement this mandate, the committee was to appoint "advisory panels to broaden the base and range of experience and competence" brought to bear in the development of the methodology.

In response, the committee established 11 advisory panels: data and methodology (central to the analyses in chapters 4, 7, and 8); affiliations (see chapter 9); nonphysician practitioners (see chapter 10); and six specialty and two clinical program panels, to serve as sources of professional judgment in the methodology's development. The six specialty panels were medicine, surgery (including also anesthesiology), psychiatry, neurology, rehabilitation medicine (including also spinal cord injury), and other physician specialties (encompassing laboratory medicine, diagnostic radiology, nuclear medicine, and radiation oncology). The committee also appointed two multidisciplinary clinical program panels in the areas of ambulatory care and long-term care. Each panel was composed of VA as well as non-VA representatives, with the former never constituting a majority.

A central issue for the committee was determining the scope of the charge given to the specialty and clinical program panels. Two general approaches were considered:
1. In a physician requirements methodology relying primarily on the Empirically Based Physician Staffing Models (EBPSM), the panels would be asked to react to the estimated statistical models presented to them, evaluating their specification from a clinical perspective, and possibly modifying either the models themselves or their staffing recommendations.

2. In a physician requirements methodology calling for a more balanced reliance on statistically based and expert judgment-based approaches, the panels would serve as the principal source of independently derived quantitative assessments of appropriate physician staffing.

Under this second approach, the panels would not simply be critiquing and modifying statistical models, but would be rendering their own professional judgments about physician staffing levels consistent with high-quality medical care in particular clinical settings. These Full-Time-Equivalent Employee (FTEE) levels could then be compared with those emerging from the EBPSM for those same clinical settings. Under either interpretation, the panels would seek to develop external (to the VA) physician staffing norms, which would aid in the interpretation of statistically based as well as expert judgment-based results.

The committee decided that the second, more expansive, interpretation of the panels' charge was the more appropriate. The committee could envision a structured process in which panel members (1) are shown a statistical model, its physician staffing implications, or both; (2) are asked to determine whether the model leads to staffing consistent with high-quality care; and (3) if not, are asked to "manipulate" the model's estimated coefficients in some fashion to generate appropriate staffing results. However, there were several concerns about proceeding this way.
First, with a single exception, the panels were composed of expert clinicians with varying amounts of experience with formal statistical techniques; a coefficient manipulation process would not make the best use of the collective expertise represented on the panels. Second, given only the estimated equations (as shown in chapter 4), on what basis would panel members be able to judge whether the resulting physician staffing levels were "appropriate"? That is, can a staffing level be judged as appropriate, or not, in the absence of facility-specific information to establish a concrete backdrop—a context for evaluation? Third, the methodological foundations for a coefficient manipulation approach have not been well established in the social science or statistical literatures. In contrast, the conceptual underpinnings and assumptions of the statistical analyses in chapter 4 are clear and well known. The expert judgment methods of decision making detailed later in this chapter, although not based on a rigorous, axiomatic approach, are nonetheless clear and unambiguous in their assumptions and implications. To enmesh the two approaches through a
coefficient manipulation process is to proceed down a methodological path whose theoretical underpinnings are not well established.[1]

The committee's recommendation for how to combine, or reconcile, the empirically based and expert judgment staffing results is a choice process model termed the Reconciliation Strategy (see chapter 6). Thus, each of the eight panels—in the course of two meetings in Washington, D.C., an extended conference call, and numerous mail and telephone communications with study staff—accomplished the following:

1. Critiqued the empirically based models, offering recommendations about the choice of variables, data sets, and mathematical specification of the equations;

2. Developed and evaluated external (to the VA) physician staffing norms; and

3. Derived its own independent estimates of appropriate physician staffing in specific VA medical centers (VAMCs).

The panels compared these results with those from the empirically based models and the external norm analyses. At that point, on the basis of the totality of evidence, the panels revised their staffing estimates accordingly. To accomplish the latter task, the committee had to define a panel process that was methodologically sound and capable of being implemented by the eight panels in a consistent, yet flexible, way.

In the health arena alone, there have been a number of recent efforts to use expert judgment processes, in scholarly analyses as well as in forums for public decision making. In the next section, the most prominent recent applications of these approaches are reviewed, and their implications for an expert judgment methodology appropriate for determining physician requirements are discussed. In addition, details are given about what the specialty and clinical program panels accomplished in their approximately eight months of analyses in 1990.
There is some discussion of their critiques of the empirically based models, which proved substantive and useful to the committee. However, this chapter focuses principally on the development of two alternative expert judgment approaches to estimating physician requirements: the Detailed Staffing Exercise (DSE) and the Staffing Algorithm Development Instrument (SADI). In addition, the process for constructing and evaluating external staffing norms is described.

[1] In Volume II, Supplementary Papers, the committee discusses an alternative approach—Bayesian econometric modeling—for formally combining expert judgment and empirically based results to derive, through an integrated mathematical formula, physician staffing requirements. This Bayesian approach was not pursued in this study for important practical reasons; it remains of theoretical interest and could be implemented under certain circumstances.
THE PANEL PROCESS—IN THEORY

In designing a process by which the six specialty and two clinical program panels would operate, the committee faced two major methodological questions: By what means and in what form would expert judgment be elicited? How would the judgments of individual panel members be combined to reach consensus positions? Before the committee's own choices are discussed, the strategies of others in this area are reviewed.

There is a growing literature on the formal use of expert opinion in health care policy and research. These applications sometimes involve the estimation of model parameters for which objective data are either missing or inappropriate. More often, expert judgment is used to reach decisions either about the advisability of particular decisions that are intermediate to a final policy outcome, or about the advisability of the outcome itself.

Scheme For Eliciting Judgments

Although there are a number of variations on the theme, methods to elicit expert judgment in a way that leads (eventually) to consensus positions can be grouped into three broad categories: the "pure" Delphi method, group interactive methods, and modified Delphi approaches.

"Pure" Delphi Method

Panel members render judgments individually and anonymously, typically through self-administered questionnaires. The elicitation continues through several iterations. After each elicitation, the individual judgments are collected, analyzed, and fed back to all members so that each can see where he/she stands in relation to the others. The elicitations continue until, in the judgment of the analyst, either a consensus has been reached or a "point of diminishing returns is reached" (Fink et al., 1984). The Delphi method offers several advantages.
It encourages individual members to express views freely and impersonally; the opportunity is diminished for strong personalities to dominate the decision or for "group think" to lead to an artificial or premature consensus. Because the method does not require panel members to meet face to face, it can be conducted relatively efficiently and inexpensively by mail, with spatially separated participants completing questionnaires on a flexible schedule. The major disadvantage of the "pure" Delphi method is that, because panel members do not interact, there is no opportunity for each to probe the
positions of others, defend his/her own position, and thus gain a richer understanding of the problem (unless they are able to communicate informally). Fink and colleagues (1984) cite a number of Delphi method applications in health. More recently, the Harvard-based team producing the Resource-Based Relative Value Scale (RBRVS) (Hsiao et al., 1990) has experimented successfully with several methods (including the Delphi) in developing a more efficient approach to estimating relative-value weights for surgical procedures.

Group Interactive Methods

Connoted here is any process in which panel members meet together, discuss information pertinent to the decision (including possibly their individual viewpoints and interpretations), and then attempt to reach a consensus. There are several variations on this theme. Panel members may be shown background materials in advance, as with the consensus development conferences sponsored by the National Institutes of Health (Kosecoff et al., 1987). Alternatively, information for the discussion may be first revealed, or even developed, during the meeting, as in applications of the nominal group process (see Fink et al., 1984). The discussion may be wide open, so that individuals and their viewpoints are easily linked, or structured so that viewpoints are elicited anonymously and discussed without attribution.

The strengths and weaknesses of such group interactive methods are the reverse of the Delphi's. The opportunity to exchange ideas can lead synergistically to conclusions in which more information has been brought to bear, in sum, than if participants had voted in isolation. But there is a risk that the outcome will be influenced by personality, meeting adjournment deadlines, and other factors that ought not to bear on the problem's resolution (although several variations of this method are designed to prevent this).
Modified Delphi Approaches

Several recent expert judgment applications have drawn selectively from both the Delphi and the group interactive approaches to evolve hybrid processes for eliciting information toward consensus development. Most of these can be usefully characterized as estimate-talk-estimate processes (see Gustafson et al., 1973; Ludke et al., 1990). Prior to their first meeting, panel members typically are asked to render initial judgments, anonymously and independently, based on information transmitted by the analyst. These results are submitted and displayed at the first meeting. Each panel member knows his/her position relative to the group as a whole but may or may not know how other individuals, by name, have voted.
Following discussion, the group votes again; depending on the format, this poll may or may not be anonymous. Again, the results are analyzed and displayed. The process continues until the analyst determines that either a consensus has been reached or else the costs of continuing outweigh the benefits.

Such an approach draws strength from Delphi as well as group interactive methods. By first eliciting judgments anonymously, the analyst maximizes the amount of independent (judgmental) information brought to bear on the question. The opportunity to discover plausible "outlier" positions is enhanced, which reduces the chance that the subsequent consensus will be predicated on an overly restrictive conception of possible outcomes. By discussing these initial assessments in a group setting, each panel member can benefit from the views of others, thus bringing the maximum amount of (judgmental) information to bear on his/her upcoming reassessment. On the other hand, there is the concomitant risk that personality factors, adjournment deadlines, group-think pressures, or other extraneous matters will contaminate the group interaction part of the process. The effects of these factors can be reduced by maintaining the anonymity of the panel members' positions and by such practical steps as pacing the meetings so that ample time is allowed for discussion and voting.

Studies in which a modified Delphi method has been applied include the assessments of U.S. physician requirements, by specialty, conducted initially by the Graduate Medical Education National Advisory Committee (GMENAC) (U.S.
Department of Health and Human Services, 1981) and currently by the Council on Graduate Medical Education (COGME) (Buerhaus and Zuidema, 1990); the Effectiveness Initiative conducted by the Institute of Medicine to assist the Health Care Financing Administration in setting priorities for medical practice analyses (Institute of Medicine, 1989); a series of analyses to project faculty needs as well as the manpower required to care for the elderly in future decades, based at the University of California at Los Angeles and RAND (Reuben et al., 1990, 1991); a project conducted at the Iowa City VAMC examining the appropriateness of certain nonacute inpatient admissions to VA facilities across the country (Ludke et al., 1990); a portion of the RBRVS study cited earlier (Hsiao et al., 1990); and analyses conducted by RAND in recent years to determine appropriate clinical indications for performing various medical and surgical procedures (see, e.g., Park et al., 1986).

Reaching a Consensus

There are basically two ways of arriving at a group consensus. The participants may be formally polled and the votes aggregated in some fashion to yield a group choice, or the group may agree to hammer out a consensus position following discussions in which the relevant data and views of individuals have
been aired. Such a consensus may be explicitly declared to be unanimous, the impression may be left that it is unanimous, or dissenting statements or minority reports may be filed. The most prominent forum utilizing the second approach is the program of consensus development conferences sponsored by the National Institutes of Health (Fink and Kosecoff, 1984). A recent survey (McGlynn et al., 1990) indicates that government-sponsored consensus development conferences in eight other industrialized nations also shy away from formal procedures for achieving agreement. On the other hand, all other expert judgment applications cited earlier do use explicit decision rules to map individual judgments into a consensus position.

Nearly all decision rules apply to one of three types of choice problems. The group must either (1) agree or disagree, or determine the extent of its agreement or disagreement, with one or more propositions; (2) develop a preference ranking for a set of items; or (3) produce quantitative estimates of variables or parameters for use in subsequent calculations, leading eventually to some research or policy conclusion.

An interesting example of (1) arises in the RAND studies on clinical indications for intervention (Park et al., 1986). In this modified Delphi approach, panelists were asked, prior to their first meeting, to rate each possible clinical indication for a given intervention (e.g., endoscopy) on a scale of 1 to 9, with 9 meaning "extremely appropriate" and 1 meaning "extremely inappropriate." When the panelists met, they were shown the resulting frequency distribution of their ratings; each panelist could see where his/her score fell relative to the group. Following discussion, they were then asked to reevaluate the indications on the same 1 to 9 scale.
Finally, whether the panel was in "agreement" or "disagreement" that a given clinical indication was appropriate was determined as follows: The high and low extreme scores were discarded, and the median of the remaining scores was computed. If these remaining scores fell within any three-point range on the nine-point scale, the panel was said to be in "agreement," with the median score indicating the relative degree of appropriateness/inappropriateness of the indication for the intervention in question. On the other hand, if at least one rating fell in the 1-3 range and at least one in the 7-9 range, the panel was said to be in "disagreement." Otherwise, the panel's position was said to be "equivocal."

An interesting, though not unexpected, result in three separate evaluations was that a panel's second ratings were closer to one another than the initial ratings, whether measured by the percentage of agreement, percentage of disagreement, or average dispersion of scores. An index of the latter is the mean absolute deviation (MAD) statistic, defined as MAD = Σ|Xi − Xmed|/N, where Xi is the score of the ith panel member, Xmed is the panel median score, and N is the number of panel scores used in the decision process.

A broadly similar approach to decision making was used in the VA study examining the appropriateness of acute inpatient admissions (Ludke et al., 1990). An example of the second type of consensus choice problem is found in the IOM's Effectiveness Initiative study (Institute of Medicine, 1989), in which certain scoring rules were used to derive a priority ranking of clinical conditions for further research.

When a panel is asked to derive a best estimate of a variable or parameter that can take on many (sometimes an infinity of) possible values, how should a consensus be defined? As it turns out, this is precisely the choice problem arising in the expert judgment models developed for the present study. In the GMENAC and COGME studies, expert panels estimated a number of parameters used in the calculation of the "adjusted need" for physicians (Buerhaus and Zuidema, 1990; U.S. Department of Health and Human Services, 1981). In GMENAC, the consensus value of any given parameter was the panel median estimate; to lend perspective, the high and low values were also reported. In COGME, a range of values is reported for each estimate of physician need or supply, and calculations involving these variables typically use the range midpoint values. In the small group judgment study recently conducted by the RBRVS project (Hsiao et al., 1990), panels of surgeons used a magnitude estimation technique to rate the relative amount of work required to perform a number of services. At each juncture in the process of rating each service, a median score was computed. A consensus was declared whenever all scores fell within a predetermined acceptable range of the median.
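The RAND-style agreement rule and the MAD dispersion index described above can be sketched in code. This is a simplified illustration: for instance, whether the 1-3/7-9 disagreement test is applied before or after discarding the extreme scores is an assumption here, and the published rule has additional refinements.

```python
from statistics import median

def mad(scores):
    """Mean absolute deviation around the panel median: sum(|Xi - Xmed|) / N."""
    m = median(scores)
    return sum(abs(x - m) for x in scores) / len(scores)

def classify(scores):
    """Classify a panel's 1-9 appropriateness ratings as agreement,
    disagreement, or equivocal, per the rule sketched in the text."""
    trimmed = sorted(scores)[1:-1]          # discard one high and one low extreme
    if max(trimmed) - min(trimmed) <= 2:    # all remaining scores in a 3-point range
        return "agreement"
    if min(trimmed) <= 3 and max(trimmed) >= 7:
        return "disagreement"               # ratings in both the 1-3 and 7-9 ranges
    return "equivocal"

# One outlier does not break agreement, since the extremes are trimmed first:
print(classify([8, 7, 8, 9, 7, 8, 2]))     # agreement
print(mad([1, 2, 3, 4, 5]))                # 1.2
```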
Committee's Proposed Approach To Eliciting Expert Judgments and Reaching Consensus

In light of these studies and policy applications, the committee initially determined that the specialty and clinical program panels' own estimates of appropriate physician staffing levels would be obtained through a process with the following operating characteristics. A modified Delphi approach would be developed in which panel members would independently estimate appropriate physician staffing levels (in the applicable specialty or program area only) at a selected set of actual VA facilities. These estimates would be tabulated by study staff and displayed anonymously to panel members when they next convened. In the course of discussions, it might become natural, or necessary, for individuals to become identified with their estimates, but this should evolve only as needed.
Following discussion of the first round of estimates, the panel would be asked to reassess physician requirements (in its specialty or program area only). These results likewise would be tabulated and displayed. In principle, the reassessments would continue until the members' physician FTEE estimates had—by some criterion—stabilized sufficiently that a panel consensus estimate could be declared.

But how should a consensus be defined? Following each iteration of physician FTEE assessments by the panel, the median value would be computed and the high and the low values noted. By one reasonable definition, a consensus emerges when the median stabilizes. More formally, a consensus is declared on the ith iteration if the resulting median is within an acceptable range of the median obtained at the (i-1)th iteration (the previous one). A stronger definition of consensus would require that both the median and the MAD statistic, measuring here the average dispersion of physician FTEE responses around the median, not change appreciably between assessment iterations. All else equal, this more stringent definition—requiring stability in the dispersion of assessments as well as their central tendency—is preferred.

As will be seen shortly, the concepts underlying both the committee's preferred scheme for eliciting expert judgment and its preferred definition of consensus undergird the operations of the eight panels. Given this study's developmental nature and time constraints, however, the panels' consensus assessments of appropriate physician staffing—via both the DSE and the SADI—must be regarded as approximations of what would be obtained had these expert judgment processes been able to proceed through several iterations. Again, the panels' charge in this regard was to help the committee develop methods for staffing, not to render the final numbers on VA physician requirements.
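The stronger stopping rule just described—stability of both the median and the MAD between successive assessment iterations—can be sketched as follows. The numeric tolerances are illustrative assumptions only; the study does not fix particular values.

```python
from statistics import median

def mad(scores):
    """Mean absolute deviation of scores around their median."""
    m = median(scores)
    return sum(abs(x - m) for x in scores) / len(scores)

def consensus_reached(prev, curr, median_tol=0.5, mad_tol=0.5):
    """Return True if, between two FTEE assessment iterations, both the
    median and the MAD are stable. Tolerances (in FTEE units) are
    illustrative assumptions, not values from the study."""
    return (abs(median(curr) - median(prev)) <= median_tol
            and abs(mad(curr) - mad(prev)) <= mad_tol)

# Estimates tightening around a stable median satisfy the rule;
# a stable median with widening dispersion does not.
print(consensus_reached([10, 12, 14], [10.5, 12, 14]))   # True
print(consensus_reached([10, 12, 14], [6, 12, 20]))      # False
```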
THE PANEL PROCESS—IN PRACTICE

In this section the operation of the six specialty and two clinical program panels is described in terms of what turned out to be their major functional responsibilities: evaluating the EBPSM, developing and testing the DSE, developing and testing the SADI, and evaluating external (non-VA) norms to guide physician staffing decisions. The primary focus here will be on the DSE and SADI because they are new vehicles for deriving expert judgment estimates of appropriate physician staffing; as such, they played central roles in most of the panels' recommendations for how the VA ought to determine physician requirements.

Although the planning for panel operations began early in the study and their interactions with the committee and the staff continued through the first six
months of 1991, the bulk of the activities described below occurred during the first 10 months of 1990. For expository purposes, it is useful to divide this period roughly into three phases: preparation for and conduct of the first panel meetings (January through April); preparation for and conduct of the second panel meetings (May through mid-August); and postmeeting activities (mid-August through October), culminating in a panel chairmen's session at the November 1-2, 1990, meeting of the committee. Before the panels' accomplishments are discussed, the procedures for appointing panel members are reviewed briefly.

Appointment of Specialty and Clinical Program Panels

The committee intended that the membership of each panel reflect a broad spectrum of clinical knowledge, professional judgment, and special technical expertise. Collectively, the physicians on each panel were selected to bring perspectives spanning a variety of clinical practice settings. It was understood from the beginning that the study would focus on the major specialty and program areas prominent in the VA; hence, the committee was constituted so as to have representation in these areas. It was natural that the chairs of the six specialty and two clinical program panels be drawn directly from the committee membership.

The study's workplan called for each panel to consist of VA as well as non-VA members, with the latter constituting a voting majority in each case. In response, the committee asked the Department of Veterans Affairs to nominate VA staff candidates for panel membership. The VA liaison committee proposed candidates for each panel, and a list of nominees was subsequently submitted to the IOM by the VA chief medical director. Non-VA panel nominees were initially solicited from members of the study committee.
Additional nominees were drawn from the IOM membership, in consultation with the director of the Division of Health Care Services and the IOM executive office. After all nominations were received, a tentative panel roster (of non-VA and VA candidates) was submitted to each panel chairman for review. Each chair could propose additional nominees. The final selection of VA and non-VA members was made by the panel chairman in consultation with the chairman of the committee. (A complete set of panel rosters is contained in Appendix A of this report.)
Evaluating the EBPSM

The specialty and clinical program panels provided important critical advice to the data and methodology panel and the committee about several aspects of the EBPSM.

Selection of Variables For Multivariate Regression Equations

At its first and second meetings and during the postmeeting period, each panel was shown various specifications of empirically based models pertinent to its specialty domain or program area. Each was asked to address several questions:

- Is workload defined appropriately?
- Are the physician FTEE variables properly constituted?
- Do the variables included in the equations make clinical and organizational sense?
- Did the variables perform as expected statistically?
- For coefficient estimates that are not statistically significant, or that are significant but with the "wrong" algebraic sign (indicating "perverse causality"), what factors might be at work?
- Are there variables currently omitted from the equations that should be tested on clinical or organizational grounds?

During the first and second panel meetings, the panels' empirically based model critique focused entirely on the production function (PF) variant. The inverse production functions (IPFs) did not begin emerging until the postmeeting period and were then evaluated by the panels at two junctures: first, via mail communications with study staff during late August; and, second, during the conference calls with staff in late October. In the course of these meetings, written communications, and phone calls, panel members contributed numerous suggestions on improving the empirically based models (including the sentiment, expressed on occasion, that the models be discarded entirely in favor of an expert judgment approach). Every panel's empirically based models were significantly modified as a result of these give-and-take discussions.
Physician Time per Visit (in hours)

Type of Visit                        High   Low    Mean   Median
New Patient Visit with NP or PA      1.00   0.33   0.67   0.70
Follow-Up Visit, No Resident         0.33   0.25   0.30   0.33
Follow-Up Visit with Resident        0.33   0.08   0.22   0.25
Follow-Up Visit with NP or PA        0.33   0.08   0.25   0.25
SECTION B: NON-PATIENT-CARE ACTIVITIES

Part 1. The activities listed below generally do not occur every day, but may be time-consuming when looked at over a longer period, such as a week or month. List the time in hours that you would add to each physician's average workday to allow for the types of work other than direct patient care listed below.

Chart 9. Physician Hours/Workday. Assume the amount of research accomplished at this VAMC is High, Medium, or Low.[1]

Education of residents (didactic, classroom, not on the PCA):
  High research:    High 1.00   Low 0.30   Mean 0.42   Median 0.45
  Medium research:  High 1.00   Low 0.30   Mean 0.42   Median 0.45
  Low research:     High 1.00   Low 0.12   Mean 0.32   Median 0.45

Administration by Chief (time required to manage your whole service by a Chief and/or Assistant Chief):
  High research:    High 7.00   Low 3.00   Mean 4.00   Median 3.30
  Medium research:  High 7.00   Low 2.30   Mean 3.55   Median 3.30
  Low research:     High 7.00   Low 1.00   Mean 3.25   Median 3.30
Chart 9 (continued). Physician Hours/Workday. Assume the amount of research accomplished at this VAMC is High, Medium, or Low.[1]

Administration by Others (time required for individual physicians):
  High research:    High 1.00   Low 0.05   Mean 0.25   Median 0.40
  Medium research:  High 1.00   Low 0.05   Mean 0.25   Median 0.40
  Low research:     High 1.00   Low 0.05   Mean 0.25   Median 0.40

Hospital-Related Activities (mortality and morbidity, quality assurance, staff meetings):
  High research:    High 1.00   Low 0.35   Mean 0.40   Median 0.35
  Medium research:  High 1.00   Low 0.25   Mean 0.40   Median 0.35
  Low research:     High 1.00   Low 0.25   Mean 0.35   Median 0.30

Total Hours per Average Workday:
                    High research        Medium research      Low research
                    Chief   Non-Chief    Chief   Non-Chief    Chief   Non-Chief
  Overall Mean      4.0     1.8          3.9     1.9          3.4     1.6
  Overall Median    3.5     1.5          3.5     1.8          3.5     1.5

[1] Examples of research level by total amount of funding (VA plus non-VA) in fiscal year 1988: High—VAMC I with $8.8 million in total funding; Medium—VAMC II with $2.75 million in total funding; Low—VAMC III with about $176,000 in total funding.
Part 2. In order to determine the actual staffing in this hospital, the number of FTEE must be adjusted to allow for continuing medical education, research, and leaves of absence. What do you believe to be the appropriate percentage of time the "average" (typical) member of your service should devote to each of the following categories of non-patient-care-related activities?

Chart 10. Percentage of Physician Time. Assume the amount of research accomplished at this VAMC is High, Medium, or Low.[1]

Continuing Education:
  High research:    High 15.0   Low 1.5    Mean 7.4    Median 8.0
  Medium research:  High 15.0   Low 1.5    Mean 7.4    Median 8.0
  Low research:     High 10.0   Low 1.5    Mean 6.2    Median 6.0

Research (off the PCA):
  High research:    High 50.0   Low 30.0   Mean 36.3   Median 34.0
  Medium research:  High 30.0   Low 20.0   Mean 23.3   Median 23.0
  Low research:     High 15.0   Low 0.0    Mean 7.5    Median 7.5

Vacation, Administrative Leave, Sick Leave, Other:
  High research:    High 15.0   Low 8.0    Mean 12.5   Median 13.0
  Medium research:  High 15.0   Low 8.0    Mean 12.5   Median 13.0
  Low research:     High 25.0   Low 8.0    Mean 14.0   Median 13.0

Total Percentage of Time:
            High research   Medium research   Low research
  Mean      55.6            43.3              27.9
  Median    54.0            44.3              26.8

[1] Examples of research level by total amount of funding (VA plus non-VA) in fiscal year 1988: High—VAMC I with $8.8 million in total funding; Medium—VAMC II with $2.75 million in total funding; Low—VAMC III with about $176,000 in total funding.
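Part 2 asks how much physician time is off patient care so that patient-care FTEE can be grossed up to total FTEE. A minimal sketch of one standard adjustment is shown below; the divide-by-(1 - fraction) form is a common staffing convention assumed here for illustration, not necessarily the exact formula the study applies.

```python
def adjusted_ftee(patient_care_ftee, nonpatient_fraction):
    """Gross patient-care FTEE up for time spent on continuing education,
    research (off the PCA), and leave. The divisor form is an assumption
    for illustration; the study's own adjustment may differ."""
    if not 0.0 <= nonpatient_fraction < 1.0:
        raise ValueError("nonpatient_fraction must be in [0, 1)")
    return patient_care_ftee / (1.0 - nonpatient_fraction)

# Chart 10 medians: about 54.0% of time off patient care at a high-research
# VAMC versus 26.8% at a low-research one, so the same patient-care workload
# implies very different total staffing:
high = adjusted_ftee(5.0, 0.540)   # roughly 10.9 FTEE
low = adjusted_ftee(5.0, 0.268)    # roughly 6.8 FTEE
print(high, low)
```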
Figure 5.3 Application of the SADI to Compute Physician Requirements in Medicine at VAMC I¹

FOR SECTION A: PATIENT CARE ACTIVITIES

Medicine Inpatient PCA

Admissions

Physician hours are the product of admissions per day and the panel's median estimate of physician time per admission, given resident availability. The former is supplied by the VAMC; the latter is from Chart 1 of Figure 5.2.

  15 Adm/day × 0.50 hr/Adm = 7.50 hr (Wards)
   1 Adm/day × 0.50 hr/Adm = 0.50 hr² (Intensive Care)
  Subtotal for Admissions = 8.00 hr

Routine Care

Based on the overall median estimates from Charts 3 and 4 of Figure 5.2. In each instance below, the required physician time estimate could not be read directly from the charts but had to be derived by interpolation, extrapolation, or some other mapping process.

  Ward 1: ADC = 26: 5.08 hr³
  Ward 2: ADC = 31: 5.10 hr³
  Ward 3: MICU with ADC = 6: 3.07 hr⁴
  Ward 4: CCU with ADC = 6: 3.07 hr⁴
  Ward 5: Bone Marrow Transplant Unit (BMTU) with ADC = 5: 2.63 hr⁵

¹ Since VAMC I is a highly affiliated, research-intensive facility, all physician time estimates assume resident availability. All workload-related data are taken from the medicine DSE developed for VAMC I and are based on information reported to study staff by officials at the facility.
² Assumes admission work-up time is the same as for medicine wards. Admission times are taken from Chart 1 of Figure 5.2.
³ Estimate based on extrapolation of overall median values found in Chart 3 under Routine Daily Patient Care in Figure 5.2.
⁴ Estimate based on linear interpolation of overall median values found in Chart 4 under Routine Daily Patient Care in Figure 5.2.
⁵ Estimate derived from ICU/CCU times found in Chart 4 under Routine Daily Patient Care in Figure 5.2, since neither the BMTU nor the GEU is included in the current medicine SADI.
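The admissions arithmetic above reduces to a rate-times-time product. A minimal Python sketch of that step, using the workload figures quoted in the example (the variable names are illustrative):

```python
# Physician hours for admissions: admissions/day x panel median hr/admission.
# Rates are VAMC I's reported workload; 0.50 hr/Adm is the Chart 1 median
# (resident availability assumed), as quoted in the example above.
HR_PER_ADMISSION = 0.50

ward_hours = 15 * HR_PER_ADMISSION  # 7.50 hr (wards)
icu_hours = 1 * HR_PER_ADMISSION    # 0.50 hr (intensive care)
admissions_subtotal = ward_hours + icu_hours

print(admissions_subtotal)  # 8.0
```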
  Intermediate Care: ADC = 1: 0.54 hr⁶
  Geriatric Evaluation Unit (GEU): ADC = 6: 3.07 hr⁵
  Subtotal for Routine Care = 22.56 hr

Special Procedures

Physician hours are the product of procedures per day and the panel's median estimate of physician time per procedure, given resident availability. The former is supplied by the VAMC; the latter is from Chart 7 of Figure 5.2.

  Cardiac Caths:   1.5 Caths/day    × 1.50 hr/cath    = 2.25 hr
  Endoscopies:     6 Endos/day      × 0.70 hr/endo    = 4.20 hr
  Bronchoscopies:  3.5 Bronchos/day × 0.87 hr/broncho = 3.03 hr
  Subtotal for Special Procedures = 9.48 hr

Subtotal for Medicine Inpatient PCA: 40.04 hr/day

Consultations

Physician hours are the product of consults per day and the panel's median estimate of physician time per consult, given resident availability. The former is supplied by the facility; the latter is from either Chart 5 or Chart 6 of Figure 5.2, depending on whether the consult is "initial" or "follow-up."

Surgery Inpatient PCA: 18.50 consults/day⁷
  Initial:   9.25 visits⁸ × 0.50 hr/visit = 4.63 hr
  Follow-up: 9.25 visits  × 0.25 hr/visit = 2.31 hr
  Subtotal = 6.94 hr/day

⁶ Assumes Routine Daily Patient Care time is the same as for medicine wards in Chart 3 of Figure 5.2.
⁷ Average daily consult or visit rate by medicine service physicians, as reported by VAMC I. Consults or visits on a given day may be above or below this average figure.
⁸ Assumes 50 percent of visits are "initial" consults and 50 percent are "follow-up." Physician times per initial and follow-up consult are found in Chart 5 and Chart 6, respectively, of Figure 5.2.
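The special-procedure and consult subtotals follow the same rate-times-time pattern, with consults first split 50/50 into initial and follow-up visits (footnote 8). A sketch under those assumptions; note that the figure rounds each product to two decimals before summing, so its printed subtotals (e.g., 9.48) can differ by a cent or two from an unrounded sum:

```python
# Special procedures: procedures/day x Chart 7 median hr/procedure.
# Rates and times are the ones quoted in the example above.
procedures = {                      # rate/day, hr each
    "cardiac_cath":  (1.5, 1.50),
    "endoscopy":     (6.0, 0.70),
    "bronchoscopy":  (3.5, 0.87),
}
proc_hours = sum(rate * hrs for rate, hrs in procedures.values())

def consult_hours(consults_per_day, initial_hr=0.50, followup_hr=0.25):
    """50/50 split of consults into initial and follow-up, per footnote 8."""
    half = consults_per_day / 2
    return half * initial_hr + half * followup_hr

surgery = consult_hours(18.50)  # ~6.94 hr/day, matching the subtotal above
```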
Neurology Inpatient PCA: 1.85 consults/day⁷
  Initial:   0.92 visits⁸ × 0.50 hr/visit = 0.46 hr
  Follow-up: 0.92 visits  × 0.25 hr/visit = 0.23 hr
  Subtotal = 0.69 hr/day

Psychiatry Inpatient PCA: 5.54 consults/day⁷
  Initial:   2.77 visits⁸ × 0.50 hr/visit = 1.39 hr
  Follow-up: 2.77 visits  × 0.25 hr/visit = 0.69 hr
  Subtotal = 2.08 hr/day

Rehabilitation Medicine Inpatient PCA: 1.85 consults/day⁷
  Initial:   0.92 visits⁸ × 0.37 hr/visit = 0.34 hr
  Follow-up: 0.92 visits  × 0.25 hr/visit = 0.23 hr
  Subtotal = 0.57 hr/day

Spinal Cord Injury PCA: 0.58 consults/day⁷
  Initial:   0.29 visits⁸ × 0.50 hr/visit⁹ = 0.15 hr
  Follow-up: 0.29 visits  × 0.25 hr/visit⁹ = 0.07 hr
  Subtotal = 0.22 hr/day

Nursing Home PCA: VAMC I reports 0 consults

Subtotal for Consultations: 10.50 hr/day

Ambulatory Visits

Physician hours are the product of visits per day and the panel's median estimate of physician time per visit. The former is supplied by the VAMC; the latter is from Chart 8, expressed as a function of whether the particular clinic operates with or without residents and with or without physician assistants and nurse practitioners.

⁹ Based on median consult times to the surgery service, since SCI is not included in the current medicine SADI.
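The 10.50 hr/day consultation subtotal can be reproduced by looping over the requesting PCAs with the same 50/50 initial/follow-up split; the per-visit times below are those quoted in the example (rehabilitation medicine uses 0.37 hr for an initial consult). A sketch with illustrative names:

```python
# PCA: (consults/day, initial hr/visit, follow-up hr/visit), from the example.
consults = {
    "surgery":            (18.50, 0.50, 0.25),
    "neurology":          (1.85,  0.50, 0.25),
    "psychiatry":         (5.54,  0.50, 0.25),
    "rehab_medicine":     (1.85,  0.37, 0.25),
    "spinal_cord_injury": (0.58,  0.50, 0.25),
    "nursing_home":       (0.00,  0.50, 0.25),  # VAMC I reports 0 consults
}
# Half the consults are initial, half follow-up (footnote 8).
total = sum(rate / 2 * (init + fup) for rate, init, fup in consults.values())
print(round(total, 2))  # 10.5
```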
General Medicine: 100 visits/day⁷ (residents and NPs available)
  Initial:   20 visits¹⁰ × 0.50 hr/visit = 10.00 hr
  Follow-up: 80 visits   × 0.25 hr/visit = 20.00 hr
  Subtotal = 30.00 hr/day

General Medicine Follow-up: 18 visits/day⁷ (NPs available)
  Initial:   3.6 visits¹⁰ × 0.70 hr/visit = 2.52 hr
  Follow-up: 14.4 visits  × 0.25 hr/visit = 3.60 hr
  Subtotal = 6.12 hr/day

Cardiology: 13.6 visits/day⁷
  Initial:   2.72 visits¹⁰ × 0.50 hr/visit = 1.36 hr
  Follow-up: 10.88 visits  × 0.25 hr/visit = 2.72 hr
  Subtotal = 4.08 hr/day

Dermatology: 17 visits/day⁷
  Initial:   3.40 visits¹⁰ × 0.50 hr/visit = 1.70 hr
  Follow-up: 13.60 visits  × 0.25 hr/visit = 3.40 hr
  Subtotal = 5.10 hr/day

Endocrine: 6.4 visits/day⁷
  Initial:   1.28 visits¹⁰ × 0.50 hr/visit = 0.64 hr
  Follow-up: 5.12 visits   × 0.25 hr/visit = 1.28 hr
  Subtotal = 1.92 hr/day

¹⁰ Assumes 20 percent of ambulatory care visits involve new patients and 80 percent are for follow-up. Physician times per ambulatory visit are in Chart 8 of Figure 5.2.
Gastrointestinal: 8.4 visits/day⁷
  Initial:   1.68 visits¹⁰ × 0.50 hr/visit = 0.84 hr
  Follow-up: 6.72 visits   × 0.25 hr/visit = 1.68 hr
  Subtotal = 2.52 hr/day

Hypertension: 8.4 visits/day⁷ (NPs available)
  Initial:   1.68 visits¹⁰ × 0.70 hr/visit = 1.18 hr
  Follow-up: 6.72 visits   × 0.25 hr/visit = 1.68 hr
  Subtotal = 2.86 hr/day

Pulmonary: 12.6 visits/day⁷
  Initial:   2.52 visits¹⁰ × 0.50 hr/visit = 1.26 hr
  Follow-up: 10.08 visits  × 0.25 hr/visit = 2.52 hr
  Subtotal = 3.78 hr/day

Renal: 4.8 visits/day⁷
  Initial:   0.96 visits¹⁰ × 0.50 hr/visit = 0.48 hr
  Follow-up: 3.84 visits   × 0.25 hr/visit = 0.96 hr
  Subtotal = 1.44 hr/day

Dialysis: 10.6 visits/day⁷
  Initial:   2.12 visits¹⁰ × 0.50 hr/visit = 1.06 hr
  Follow-up: 8.48 visits   × 0.25 hr/visit = 2.12 hr
  Subtotal = 3.18 hr/day

Rheumatology: 7.6 visits/day⁷
  Initial:   1.52 visits¹⁰ × 0.50 hr/visit = 0.76 hr
  Follow-up: 6.08 visits   × 0.25 hr/visit = 1.52 hr
  Subtotal = 2.28 hr/day
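Every clinic line applies the same 20/80 initial/follow-up split (footnote 10), with only the initial-visit time varying by resident/NP availability (Chart 8). A small helper capturing that pattern (the function name is illustrative):

```python
def clinic_hours(visits_per_day, initial_hr, followup_hr=0.25,
                 initial_share=0.20):
    """Daily physician hours for one clinic: 20% initial, 80% follow-up."""
    initial = visits_per_day * initial_share
    followup = visits_per_day * (1 - initial_share)
    return initial * initial_hr + followup * followup_hr

# Reproduces the subtotals above (to within the figure's rounding):
print(round(clinic_hours(8.4, 0.50), 2))   # gastrointestinal: 2.52
print(round(clinic_hours(8.4, 0.70), 2))   # hypertension (NPs): 2.86
print(round(clinic_hours(12.6, 0.50), 2))  # pulmonary: 3.78
```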
Oncology: 8.6 visits/day⁷
  Initial:   1.72 visits¹⁰ × 0.50 hr/visit = 0.88 hr
  Follow-up: 6.88 visits   × 0.25 hr/visit = 1.72 hr
  Subtotal = 2.60 hr/day

Subtotal for Ambulatory Visits (excluding Compensation and Pension Exams¹¹): 65.88 hr/day

Total Section A Hours: 116.42 hr/day

Total Section A FTEE (assuming 40 hr/week equivalence): 116.42 hr/day ÷ 8 hr/day/FTEE = 14.6 FTEE

At its second meeting, the medicine panel agreed that no additional FTEE need be purchased for night and weekend coverage.

¹¹ At VAMC I, Compensation and Pension Examinations are not performed by VA staff physicians, but externally through contract arrangements.
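The Section A totals then convert to FTEE at 8 hr/day (a 40-hr week). A sketch of that last step, using the subtotals computed above (variable names are illustrative):

```python
# Section A: daily physician hours from the three activity groups, then FTEE.
inpatient_hr = 40.04    # medicine inpatient PCA
consult_hr = 10.50      # consultations
ambulatory_hr = 65.88   # ambulatory visits

total_hr = inpatient_hr + consult_hr + ambulatory_hr  # 116.42 hr/day
section_a_ftee = total_hr / 8                         # 14.55, reported as 14.6
print(round(section_a_ftee, 1))  # 14.6
```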
FOR SECTION B: NON-PATIENT-CARE ACTIVITIES

Didactic instruction of residents (not on PCAs), administration, and other hospital-related, non-patient-care activities:

  For Service Chief¹²: 3.5 hr/day
  For All Other Staff Physicians¹³: 1.5 hr/day × (14.6 − 1) = 20.4 hr/day

  Subtotal = 3.5 + 20.4 = 23.9 hr/day, which implies 23.9/8 = 3.0 FTEE

Total (to this point) = 14.6 + 3.0 = 17.6 FTEE.

Next, the panel's median estimates for the percentage of time to be devoted to continuing education (8%), research (34%), and vacation, administrative leave, sick leave, and other (13%) lead to an overall median estimate of 54% for the percentage of total medicine service time allocated to these activities.¹⁴

Hence, total FTEE for the medicine service at VAMC I = 17.6/(1 − 0.54) = 38.3.

This implies that about 38.3 × 0.34 = 13.0 FTEE would be devoted to research, and 38.3 × 0.08 = 3.1 FTEE to continuing education.

At its second meeting, the panel's median estimate of additional FTEE desired from Consulting & Attending and Without-Compensation physicians was 1.5. If these are included, the desired FTEE total is 38.3 + 1.5 = 39.8.

¹² Estimate assumes that, among the three FTEE categories of administration, resident classroom instruction, and other hospital-related non-patient-care activities, the service chief's time is concentrated in administration and only minimally devoted to the other two. See Chart 9 in Part 1, under Non-Patient-Care Activities, in Figure 5.2.
¹³ Estimate derived by multiplying the median estimate of total time for the three categories (i.e., 1.5 hr/day) by the number of patient-care-related FTEE, minus the assumed full-time service chief [i.e., by (14.6 − 1) = 13.6]. There are other plausible ways to compute this. See Chart 9 in Section B, Part 1, under Non-Patient-Care Activities, in Figure 5.2.
¹⁴ See Chart 10 in Part 2 under Non-Patient-Care Activities in Figure 5.2.
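The Section B arithmetic, including the gross-up for time away from patient care, can be sketched end to end. All constants are the panel medians quoted above; the variable names are illustrative:

```python
# Non-patient-care hours: service chief plus all other staff physicians.
section_a_ftee = 14.6
chief_hr = 3.5                          # hr/day for the service chief
other_hr = 1.5 * (section_a_ftee - 1)   # 1.5 hr/day x 13.6 = 20.4 hr/day
section_b_ftee = round((chief_hr + other_hr) / 8, 1)  # 23.9 / 8 -> 3.0

subtotal_ftee = section_a_ftee + section_b_ftee       # 17.6 FTEE

# Gross up for continuing education (8%), research (34%), and leave (13%):
# the overall median share of time away from patient care is 54%.
away_share = 0.54
total_ftee = subtotal_ftee / (1 - away_share)         # 38.26, reported as 38.3

research_ftee = total_ftee * 0.34                     # ~13.0 FTEE
ce_ftee = total_ftee * 0.08                           # ~3.1 FTEE
```

Dividing by (1 − 0.54) rather than multiplying by 1.54 is the key design choice: the 54% is a share of *total* time, so patient-care FTEE must be grossed up so that the remaining 46% covers the 17.6 FTEE of direct work.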